Method and client device that performs such method
Patent Abstract:
DIAGNOSTIC BASELINE: Methods and devices related to the diagnosis of a device under service (DUS) are disclosed. DUS-related data is received. The DUS-related data is determined to be aggregated into aggregated data based on a given classification of the DUS-related data. An aggregate-data comparison of the DUS-related data and the aggregated data is generated. The aggregate-data comparison may include a statistical analysis of the DUS-related data and/or a differential analysis of DUS-related data taken while the DUS is operating in two or more operating states. A DUS report is generated based on the aggregate-data comparison. The DUS report can include one or more sub-strategies. At least one of the one or more sub-strategies can include an estimate of sub-strategy success. The DUS report can be sent, and a DUS report display can be generated based on the DUS report.
Publication number: BR112013020413B1
Application number: R112013020413-3
Filing date: 2012-02-20
Publication date: 2021-02-09
Inventors: Mark Theriot; Patrick S. Merg; Steve Brozovich; Bradley R. Lewis
Applicant: Snap-On Incorporated
IPC main class:
Patent Description:
RELATED APPLICATIONS [0001] The present application claims priority to U.S. Patent Application No. 13/031,565, entitled "Diagnostic Baseline", filed on February 21, 2011, which is incorporated herein by reference in its entirety for all purposes. BACKGROUND [0002] Vehicles, such as automobiles, light trucks, and heavy trucks, play an important role in the lives of many people. To keep those vehicles operational, some of these people rely on technicians to diagnose and repair them. [0003] Vehicle technicians use a variety of tools to diagnose and/or repair vehicles. Those tools may include common hand tools, such as spanners, hammers, pliers, screwdrivers, and wrench sets, or more vehicle-specific tools, such as cylinder grinders, piston-ring compressors, and vehicle brake tools. Tools used by vehicle technicians may also include electronic tools, such as a vehicle scan tool or a digital volt-ohm meter (DVOM), for use in diagnosing and/or repairing a vehicle. [0004] The vehicle scan tool and/or the DVOM can be connected via wired and/or wireless connection(s) to other devices, possibly to communicate data about the vehicle. The vehicle scan tool and/or the DVOM can provide a significant amount of data to aid vehicle diagnosis and repair. Typically, that data does not include contextual data, such as a history report. In addition, the data is typically formatted such that interpretation by specialized personnel, such as a vehicle technician, is required before a problem with the vehicle can be identified, diagnosed, and/or repaired. OVERVIEW [0005] Various example embodiments are described in this description. In one aspect, an example embodiment can take the form of a method. At a server device, device-under-service (DUS) related data is received for a DUS. The DUS-related data is determined to be aggregated into aggregated data at the server device.
The determination that the DUS-related data should be aggregated is based on a classification of the DUS-related data. An aggregate-data comparison of the DUS-related data and the aggregated data is generated at the server device. A DUS report based on the aggregate-data comparison is then generated at the server device. The DUS report includes one or more sub-strategies. At least one of the one or more sub-strategies includes an estimate of sub-strategy success. The DUS report is then sent by the server device. [0006] In a second aspect, an example embodiment can take the form of a client device that includes a memory, a processor, and instructions. The instructions are stored in the memory. In response to execution by the processor, the instructions cause the client device to perform functions. The functions may include: (a) receiving a diagnostic request for a DUS, (b) sending a DUS test request to the DUS to perform a test related to the diagnostic request, (c) receiving DUS-related data from the DUS, (d) sending the DUS-related data, (e) receiving a DUS report based on the DUS-related data, and (f) generating a DUS report display from the DUS report. [0007] In a third aspect, an example embodiment can take the form of a method. A device receives a diagnostic request to diagnose a DUS. A test based on the diagnostic request is determined at the device. The test is related to a first operating state of the DUS. The device requests performance of the test in the first operating state of the DUS. First-operating-state data for the DUS is received at the device. The first-operating-state data is based on the test. Performance of the test in a second operating state of the DUS is requested by the client device. [0008] The device verifies whether or not the first-operating-state data is related to the first operating state.
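The server-side flow of paragraph [0005] (receive DUS-related data, aggregate by classification, compare against the aggregated population, and report) can be illustrated with a minimal sketch. The class names, the set of aggregatable classifications, the deviation threshold, and the sub-strategy text below are illustrative assumptions, not part of the disclosed embodiments:

```python
from dataclasses import dataclass, field

@dataclass
class DusReport:
    """A DUS report: an aggregate-data comparison plus sub-strategies,
    each carrying an estimate of sub-strategy success."""
    comparison: dict
    sub_strategies: list = field(default_factory=list)  # (text, success_estimate)

class DiagnosticServer:
    def __init__(self):
        self.aggregated = {}  # classification -> list of sample dicts

    def receive(self, dus_data: dict, classification: str) -> DusReport:
        # Aggregate only when the classification calls for it (paragraph [0005]).
        if self.should_aggregate(classification):
            self.aggregated.setdefault(classification, []).append(dus_data)
        comparison = self.compare(dus_data, self.aggregated.get(classification, []))
        return self.generate_report(comparison)

    def should_aggregate(self, classification: str) -> bool:
        # Assumed classifications; the document names "reference data"
        # and "defective brake" as examples.
        return classification in ("baseline", "defective brake")

    def compare(self, dus_data: dict, population: list) -> dict:
        # Statistical comparison: deviation of each parameter from the
        # population mean (the current sample is already in the population).
        comparison = {}
        for key, value in dus_data.items():
            values = [s[key] for s in population if key in s]
            mean = sum(values) / len(values) if values else value
            comparison[key] = value - mean
        return comparison

    def generate_report(self, comparison: dict) -> DusReport:
        # Assumed rule: flag any parameter deviating by more than 1.0 units.
        subs = ([("investigate parameters deviating from baseline", 0.7)]
                if any(abs(d) > 1.0 for d in comparison.values()) else [])
        return DusReport(comparison=comparison, sub_strategies=subs)
```

A usage sketch: a first baseline sample compares against itself (zero deviation), while later samples compare against the growing aggregated population.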
In response to verifying that the first-operating-state data is related to the first operating state, the device (a) generates a differential analysis based on the first-operating-state data, (b) generates a DUS report display based on the differential analysis, and (c) sends the DUS report display. [0009] These as well as other aspects and advantages will be apparent to those of ordinary skill in the art by reading the following detailed description, with reference, where appropriate, to the accompanying drawings. Still, it should be understood that the embodiments described in this overview and elsewhere are intended to be examples only and do not necessarily limit the scope of the invention. BRIEF DESCRIPTION OF THE DRAWINGS [00010] Example embodiments are described here with reference to the drawings, in which like numerals denote like components, and in which: [00011] Figure 1 is a block diagram of an example system; [00012] Figure 2 is a block diagram of an example computing device; [00013] Figure 3 is a block diagram of an example client device; [00014] Figure 4 is a block diagram of an example server device; [00015] Figure 5 depicts an example data-collection display; [00016] Figure 6A shows an example scenario for processing a diagnostic request, responsively generating a DUS report display, and receiving success-related data; [00017] Figure 6B shows an example scenario for processing DUS-related data and responsively generating a diagnostic request; [00018] Figure 6C shows an example scenario for processing DUS-related data and responsively generating a DUS report; [00019] Figure 6D shows another example scenario for processing DUS-related data and responsively generating a DUS report; [00020] Figure 7A shows an example scenario for processing a diagnostic request, responsively generating a DUS report display, and receiving success-related data; [00021] Figure 7B shows an example scenario for processing
a diagnostic request and responsively generating a DUS test request; [00022] Figure 7C shows an example scenario for processing DUS-related data and responsively generating a DUS report display; [00023] Figure 8A depicts an example flowchart illustrating functions for generating a differential analysis; [00024] Figure 8B shows an example grid with a grid cell corresponding to a first operating state and a grid cell corresponding to a second operating state; [00025] Figure 9 depicts an example flowchart illustrating functions that can be performed according to an example embodiment; [00026] Figure 10 is another flowchart depicting functions that can be performed according to an example embodiment; and [00027] Figure 11 is yet another flowchart depicting functions that can be performed according to an example embodiment. DETAILED DESCRIPTION INTRODUCTION [00028] This description discloses systems comprising multiple devices. Each device in a described system is operable independently (for example, as a standalone device) as well as in combination with other devices in the system. Each device in a described system can be referred to as an apparatus. [00029] Each device in a described system is operable to perform functions for maintaining a device under service (DUS). The DUS can comprise a vehicle, a refrigeration unit, a personal computer, or some other device on which maintenance can be performed. Additionally or alternatively, the DUS may comprise a system, such as a heating, ventilation, and air-conditioning (HVAC) system, a security system, a computer system (for example, a network), or some other system on which maintenance can be performed. DUS maintenance functions may include, but are not limited to, diagnostic functions, measurement functions, and scan functions. [00030] To work in combination with one another, each device of a described system is configured to communicate with another device via a communications network.
The communications network can comprise a wireless network, a wired network, or both a wireless network and a wired network. Data obtained by a device from the DUS, or data otherwise contained in the device, can be transmitted to another device through the communications network between those devices. [00031] Devices in the described system can be connected via wired and/or wireless connections. Wired and wireless connections can use one or more communication protocols arranged in accordance with one or more standards, such as a standard from SAE International, the International Organization for Standardization (ISO), or an 802 standard from the Institute of Electrical and Electronics Engineers (IEEE). A wired connection can be established using one or more wired communication protocols, such as the On-Board Diagnostics II ("OBD-II") protocol family (for example, SAE J1850, SAE J2284, ISO 9141-2, ISO 14230, ISO 15765), IEEE 802.3 ("Ethernet"), or IEEE 802.5 ("Token Ring"). A wireless connection can be established using one or more wireless communication protocols, such as Bluetooth, IEEE 802.11 ("Wi-Fi"), or IEEE 802.16 ("WiMax"). [00032] A client device of the described system is configured to communicate directly with the DUS, in part by sending test requests for diagnostic information and receiving test-related data in response. In some embodiments, the test requests and/or test-related data are formatted according to an OBD-II protocol. [00033] The client device can request DUS operation under a variety of operating conditions to collect test-related data. The conditions and the amount of data collected can be customized based on the type of user (for example, service technician, layperson, engineer, etc.) using the client device. The client device can also collect a "complaint" about the DUS, which can include text about the DUS's operating condition. In some embodiments, the complaint can be specified as a "complaint code", such as an alphanumeric code.
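As an illustration of the OBD-II-formatted test requests mentioned in paragraph [0032], the following sketch builds a standard mode-01 ("show current data") request for engine RPM (PID 0x0C) and decodes the well-known response formula (256*A + B) / 4. The transport layer is omitted, and the helper names are assumptions, not part of the disclosed embodiments:

```python
def build_mode01_request(pid: int) -> bytes:
    """Build an OBD-II mode-01 ("show current data") request for one PID."""
    return bytes([0x01, pid])

def decode_rpm(response: bytes) -> float:
    """Decode a mode-01 PID 0x0C response: 0x41 0x0C A B -> (256*A + B) / 4 rpm."""
    assert response[0] == 0x41 and response[1] == 0x0C, "not a PID 0x0C response"
    a, b = response[2], response[3]
    return (256 * a + b) / 4.0

# Example: a DUS answering 0x1A 0xF8 for PID 0x0C is turning about 1726 rpm.
request = build_mode01_request(0x0C)
rpm = decode_rpm(bytes([0x41, 0x0C, 0x1A, 0xF8]))
```

In practice, such requests and responses would travel over connection 112 (wired OBD-II, or wirelessly via an OBD-II scanner as described later in the text).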
[00034] A server device of the described system is configured to receive the test-related data, and possibly "device-related data" for the DUS, and to responsively generate a "DUS report". The DUS report may include a "strategy" for diagnosing and/or repairing the DUS to address a complaint. The strategy may include a "statistical analysis" of the received test-related data and/or one or more "sub-strategies" (for example, recommendations, directions, proposed actions, and/or additional tests). Device-related data may include, but is not limited to, data for device assembler, device manufacturer identity, device model, device manufacturing time (for example, model year), mileage, device control-unit report (for example, for a vehicle, engine control unit (ECU) type and release information), operating-time data, device identity information, device-owner identity information, service provider, service location, service technician, and/or device location. In some embodiments, the client device can generate the DUS report. [00035] The server device and/or the client device can create a "profile" for the DUS. The profile can be configured to store device-related data for the DUS, complaints, DUS reports, and/or test-related data taken at various times during the life of the DUS. [00036] In some embodiments, "reference points" or "reference data" are stored, possibly with the profile. The reference data can be taken during a complaint-free operating interval of the DUS. Reference data may include data provided by an original equipment manufacturer (OEM) with respect to ideal/recommended operation. [00037] In some embodiments, a "data logger" can be installed in the DUS to collect the reference data.
In these embodiments, once the reference data is collected, the reference data can be communicated from the data logger to a "service provider" responsible for diagnosing, maintaining, and/or repairing the DUS. In other embodiments, the service provider collects the reference data at a service facility. [00038] The reference data can be compared with test-related data taken in response to a complaint about the DUS. In addition, reference data and/or test-related data from a number of devices under service can be combined and/or aggregated into a set of "aggregated data". The aggregation process may include determining a classification and/or a reliability for the aggregated data. [00039] Test-related data can be "classified" before being added to the aggregated data. Example classifications of aggregated data may include a reference-data classification and one or more diagnostic classifications. For example, if the DUS is operating without complaint, test-related data obtained from the DUS can be classified as reference data upon aggregation into the aggregated data. However, if the DUS is operating with a failure in the braking system, test-related data from the DUS can be classified as "defective brake" data upon aggregation into the aggregated data. Many other types of classifications are also possible. [00040] Reference data for a DUS, and possibly other data, can be used to generate "baseline data" for the DUS. Baseline data may include a statistical summary of the data taken for devices that share "core-device information" (for example, year, model, make, and/or ECU type/release information). Baseline data can be aggregated and/or updated over time. For example, the more test-related data is aggregated for devices under service that share core-device information, the higher the confidence values and/or the tighter the confidence intervals the aggregated baseline data may have over time.
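The growing-confidence behavior described in paragraph [00040] can be sketched with an incremental per-parameter aggregator. The use of Welford's online algorithm and a normal-approximation confidence interval is an illustrative assumption about one way such a statistical summary could be maintained, not part of the disclosed embodiments:

```python
import math

class BaselineAggregator:
    """Incrementally aggregate one parameter's baseline samples
    using Welford's online mean/variance algorithm."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def confidence_interval(self, z: float = 1.96) -> tuple:
        """Approximate 95% confidence interval for the baseline mean;
        it narrows as more devices sharing core-device information report in."""
        if self.n < 2:
            return (float("-inf"), float("inf"))
        stderr = math.sqrt(self.m2 / (self.n - 1)) / math.sqrt(self.n)
        return (self.mean - z * stderr, self.mean + z * stderr)
```

Because the state is just `(n, mean, m2)`, samples can be aggregated as they arrive without retaining the full population, which matches the "aggregated and/or updated over time" behavior the paragraph describes.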
[0041] Data other than baseline data that share a common classification can also be aggregated. For example, "defective brake" data can be aggregated, and the more defective-brake data is aggregated over time, the higher the confidence values and/or the tighter the ranges the aggregated defective-brake data may have over time. Aggregation of data for other classifications is also possible. [0042] The DUS report can be generated based on a comparison of the test-related data and the aggregated data. For example, the server device can store aggregated data, possibly including core-device information and/or baseline data for the DUS. Upon receipt of the test-related data and a complaint, the server device can determine a subset of the aggregated data based on the device-related data for the DUS. Then, the test-related data and the determined subset of the aggregated data can be compared to determine the statistical analysis, including a number of statistics for the test-related data, for the DUS report. [0043] In some embodiments, the statistical analysis can be generated based on a "differential analysis", or comparison, of the DUS operated in one or more "operating states". Example operating states for a vehicle engine (acting as a DUS) include unloaded/lightly loaded operating states (for example, a "slow idle" operating state), various operating states under normal loads (for example, a "cruise" operating state, a "start" operating state), and operating states at, or close to, maximum load (for example, a "high speed" operating state). Other operating states are also possible. The statistical analysis can then be determined based on differences in the test-related data when the DUS operated in the operating state at two or more different times and/or between two or more different measurements.
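The differential analysis of paragraph [0043] can be sketched as a per-parameter comparison of measurements taken in two operating states (or at two different times). The parameter names and sample values below are illustrative assumptions:

```python
def differential_analysis(first_state: dict, second_state: dict) -> dict:
    """Per-parameter difference between two operating states' measurements.

    Parameters present in only one state are skipped, since no
    difference can be formed for them.
    """
    return {
        key: second_state[key] - first_state[key]
        for key in first_state
        if key in second_state
    }

# Example: "slow idle" vs. "cruise" measurements from the same DUS.
idle = {"rpm": 700.0, "fuel_flow_lph": 1.2, "coolant_temp_c": 88.0}
cruise = {"rpm": 2200.0, "fuel_flow_lph": 6.5, "coolant_temp_c": 90.0}
diff = differential_analysis(idle, cruise)
```

The same function applies to the two-times/two-sources case (for example, baseline data versus freshly taken test-related data), which is how the differential analysis feeds the statistical analysis described above.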
[0044] A "rules engine" can take the statistical analysis and the complaint report, evaluate the statistical analysis in light of one or more rules about the DUS and the complaint data, and provide a strategy for investigating the complaint. In some embodiments, the rules engine may include an inference engine, such as an expert system or problem-solving system, with a knowledge base relating to at least the assessment, diagnosis, operation, and/or repair of the DUS. [0045] The rules engine can generate the DUS report, possibly by combining the statistical analysis and the strategy for addressing the complaint. The client device and/or the server device can include a rules engine. In some embodiments, the client device's rules engine differs from the server device's rules engine. [0046] The DUS report can be displayed, possibly on the client device, possibly to enable carrying out a DUS report strategy. Feedback related to the strategy's sub-strategies can be provided and used to adjust an "estimate of sub-strategy success", or probability that the sub-strategy can address a problem mentioned in the complaint data. Such sub-strategy success estimates can be provided with the DUS report. [0047] By conducting statistical analyses of aggregated data (including baseline data) and test-related data, this system can evaluate test-related data in the context of a larger population of data. By comparing the test-related data with the classified aggregated data and/or baseline data, any discrepancies between the classified aggregated data and/or baseline data and the test-related data, as shown in the statistical analysis, can be more easily identified, thus speeding the diagnosis and repair of the DUS.
By performing differential analyses using a test procedure of merely operating a DUS in an operating state, and/or by comparing data from two (or more) different times/sources (for example, aggregated/baseline data and test-related data), initial diagnostic procedures can be simplified. A strategy provided with the DUS report can include sub-strategies for diagnosing and/or repairing the DUS, further reducing the time to repair. Through the provision of statistical analysis and/or differential analysis based on a population of data, together with a strategy for diagnosis and repair, the DUS report greatly reduces, if not eliminates, guesswork about the test-related data, and reduces downtime for the device under repair. II. EXAMPLE ARCHITECTURE [00048] Figure 1 is a block diagram of an example system 100. System 100 comprises a device under service (DUS) 102 and devices 104 and 106. For purposes of this description, device 104 is referred to as a client device and device 106 is referred to as a server device. [00049] The block diagram in Figure 1 and the other diagrams and flowcharts accompanying this description are provided as examples only and are not intended to be limiting. Many of the elements illustrated in the figures and/or described here are functional elements that can be implemented as discrete or distributed components or in conjunction with other components, and in any appropriate combination and location. Those skilled in the art will appreciate that other arrangements and elements (for example, machines, interfaces, functions, orders and groupings of functions, etc.) can be used instead.
In particular, some or all of the features described here as client-device functionality (and/or component functionality of a client device) may be implemented by a server device, and some or all of the features described here as server-device functionality (and/or component functionality of a server device) may be implemented by a client device. [00050] In addition, various functions, techniques, and features described here as being performed by one or more elements can be performed by a processor executing computer-readable program instructions and/or by any combination of hardware, firmware, and software. [00051] The DUS 102 may comprise a vehicle, such as an automobile, a motorcycle, a semi-tractor, an agricultural machine, or some other vehicle. System 100 is operable to perform a variety of functions, including maintenance functions for DUS 102. The example embodiments can include or be used with any appropriate voltage or current source, such as a battery, an alternator, a fuel cell, and the like, providing any appropriate current and/or voltage, such as about 12 volts, about 42 volts, and the like. The example embodiments can be used with any desired engine or system. Those systems or engines may comprise items that use fossil fuels, such as gasoline, natural gas, propane, and the like; electricity, such as that generated by a battery, magneto, fuel cell, solar cell, and the like; wind; and hybrids or combinations thereof. Those systems or engines can be incorporated into other systems, such as an automobile, a truck, a boat or ship, a motorcycle, a generator, an aircraft, and the like. [00052] Client device 104 and/or server device 106 can be computing devices, such as the example computing device 200 described below in the context of Figure 2. In some embodiments, client device 104 comprises a digital volt meter (DVM), a digital volt-ohm meter (DVOM), and/or some other type of measuring device.
[00053] Network 110 can be established to communicatively connect devices 104 and 106. Any of these devices can communicate over network 110 once the device establishes a connection to network 110. As an example, Figure 1 shows network 110 connected to client device 104 through connection 114 and to server device 106 through connection 116. In embodiments not shown in Figure 1, DUS 102 can also be connected to network 110. [00054] Network 110 may include and/or connect to a data network, such as a wide area network (WAN), a local area network (LAN), one or more public communication networks, such as the Internet, one or more private communication networks, or any combination of such networks. Network 110 may include wired and/or wireless connections and/or devices using one or more communication protocols arranged in accordance with one or more standards, such as a standard from SAE International, the International Organization for Standardization (ISO), or the Institute of Electrical and Electronics Engineers (IEEE). [00055] Network 110 can be arranged to perform communications according to a respective air-interface protocol. Each air-interface protocol can be arranged according to an industry standard, such as an 802 standard from the Institute of Electrical and Electronics Engineers (IEEE). The IEEE 802 standard can comprise an IEEE 802.11 standard for Wireless Local Area Networks (for example, IEEE 802.11a, b, g, or n), an IEEE 802.15 standard for Wireless Personal Area Networks, an IEEE 802.15.1 standard for Wireless Personal Area Networks - Task Group 1, an IEEE 802.15.4 standard for Wireless Personal Area Networks - Task Group 4, an IEEE 802.16 standard for Broadband Wireless Metropolitan Area Networks, or some other IEEE 802 standard.
For the purposes of this description, a wireless network (or connection) arranged to perform communications in accordance with an IEEE 802.11 standard is referred to as a Wi-Fi network (or connection), a wireless network (or connection) arranged to perform communications in accordance with an IEEE 802.15.1 standard is referred to as a Bluetooth network (or connection), a wireless network (or connection) arranged to perform communications in accordance with an IEEE 802.15.4 standard is referred to as a Zigbee network (or connection), and a wireless network (or connection) arranged to perform communications in accordance with an IEEE 802.16 standard is referred to as a Wi-Max network (or connection). [00056] Network 110 can be arranged to perform communications according to a wired communication protocol. Each wired communication protocol can be arranged according to an industry standard, such as IEEE 802.3 ("Ethernet") or IEEE 802.5 ("Token Ring"). For the purposes of this description, a wired network (or connection) arranged to perform communications under an OBD-II protocol is referred to as an OBD-II network (or connection), a wired network (or connection) arranged to perform communications in accordance with an IEEE 802.3 standard is referred to as an Ethernet network (or connection), and a wired network (or connection) arranged to perform communications in accordance with an IEEE 802.5 standard is referred to as a Token Ring network (or connection). [00057] As such, wireless connections to network 110 can be established using one or more wireless air-interface communication protocols, such as, but not limited to, Bluetooth, Wi-Fi, Zigbee, and/or WiMax. Similarly, wired connections to network 110 can be established using one or more wired communication protocols, such as, but not limited to, Ethernet and/or Token Ring. As such, connections 114 and 116 can be wired and/or wireless connections to network 110.
Additional wired and/or wireless connections and/or protocols, now known or later developed, can also be used on network 110. [00058] In some embodiments not shown in Figure 1, wired and/or wireless point-to-point connections can be established between client device 104 and server device 106. In such embodiments, a feature described here for network 110 can be performed by these point-to-point connections. In other embodiments not shown in Figure 1, additional devices (for example, a computing device, smart phone, personal digital assistant, telephone, etc.) can communicate with DUS 102, client device 104, server device 106, and/or other devices over network 110. Client device 104 and/or server device 106 can operate to communicate the data described here, reports, requests, queries, profiles, displays, analyses, and/or other data (for example, auto-repair data and/or instructional data) to one or more of these additional devices not shown in Figure 1. [00059] Client device 104 can connect to DUS 102 via connection 112. In some embodiments, connection 112 is a wired connection to DUS 102, possibly an OBD-II connection or an Ethernet connection. In other embodiments, connection 112 is a wireless connection. In some of those embodiments, the wireless connection is configured to carry at least data formatted according to an OBD-II protocol. For example, an "OBD-II scanner" can be used to carry OBD-II data over a wireless connection. The OBD-II scanner is a device with a wired OBD-II connection to DUS 102 and a wireless transmitter. The OBD-II scanner can retrieve data formatted according to an OBD-II protocol from DUS 102 and transmit the OBD-II-formatted data over a wireless connection (for example, a Bluetooth, Wi-Fi, or Zigbee connection) established using the wireless transmitter. One example OBD-II scanner is the VERDICT S3 Wireless Scanner Module manufactured by Snap-on Incorporated of Kenosha, WI.
In some embodiments, protocols other than an OBD-II protocol, now known or later developed, can specify data and/or transmission formats. [00060] In another example, a data logger (not shown in Figure 1) can be used to collect data from DUS 102 during operation. Once connection 112 to DUS 102 is established, the data logger can communicate the collected data to client device 104 and possibly to server device 106. [00061] Figure 2 is a block diagram of an example computing device 200. As illustrated in Figure 2, computing device 200 includes a user interface 210, a communication network interface 212, a processor 214, and data storage 216, all of which can be connected together via a system bus, network, or other connection mechanism 220. [00062] User interface 210 is operable to present data to and/or receive data from a user of computing device 200. User interface 210 may include an input unit 230 and/or an output unit 232. Input unit 230 can receive input, possibly from a user of computing device 200. Input unit 230 can comprise a keyboard, a keypad, a touch screen, a computer mouse, a trackball, a joystick, and/or other similar devices, now known or later developed, capable of receiving input for computing device 200. [00063] Output unit 232 may provide output, possibly to a user of computing device 200. Output unit 232 may comprise a visible output device for generating visual output(s), such as one or more cathode ray tubes (CRTs), liquid crystal displays (LCDs), light-emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, now known or later developed, capable of displaying graphical, textual, and/or numerical information.
Output unit 232 may alternatively or additionally comprise one or more auditory output devices for generating audible output(s), such as a speaker, speaker jack, audio output port, audio output device, headphones, and/or other similar devices, now known or later developed, capable of conveying audible information. [00064] Communication network interface 212 may include a wireless interface 240 and/or a wired interface 242, possibly for communication over network 110 and/or over point-to-point connection(s). Wireless interface 240 may include a Bluetooth transceiver, a Zigbee transceiver, a Wi-Fi transceiver, a WiMAX transceiver, and/or some other type of wireless transceiver. Wireless interface 240 can communicate with devices 102, 104, 106, network 110, and/or other device(s) configured to communicate wirelessly. [00065] Wired interface 242 can be configured to communicate according to a wired communication protocol (for example, Ethernet, OBD-II, Token Ring) with devices 102, 104, and/or 106, network 110, and/or other device(s) configured to communicate over wired connections. Wired interface 242 may comprise a port, a metallic wire, a cable, an optical-fiber connection, or a similar physical connection to devices 102, 104, 106, network 110, and/or other device(s) configured to communicate via wire. [00066] In some embodiments, wired interface 242 comprises a Universal Serial Bus (USB) port. The USB port can communicatively connect to a first end of a USB cable, while a second end of the USB cable can communicatively connect to a USB port on another device connected to network 110 or some other device. In other embodiments, wired interface 242 comprises an Ethernet port. The Ethernet port can communicatively connect to a first end of an Ethernet cable, while a second end of the Ethernet cable can communicatively connect to an Ethernet port on another device connected to network 110 or some other device.
[00067] In some embodiments, communication network interface 212 can provide reliable, secure, and/or authenticated communications. For each communication described here, information to ensure reliable communications (i.e., guaranteed message delivery) may be provided, possibly as part of a message header and/or footer (for example, packet/message sequencing information, encapsulation header(s) and/or footer(s), size/time information, and transmission-verification information, such as cyclic redundancy check (CRC) and/or parity-check values). Communications can be made secure (for example, encoded or encrypted) and decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, DES, AES, RSA, Diffie-Hellman, and/or DSA. Other cryptographic protocols and/or algorithms can also be used, instead of or in addition to those listed here, to secure (and then decrypt/decode) communications. [00068] Processor 214 may comprise one or more general-purpose processors (for example, microprocessors manufactured by Intel or Advanced Micro Devices) and/or one or more special-purpose processors (for example, digital signal processors). Processor 214 can execute computer-readable program instructions 250 that are contained in data storage 216, and/or other instructions, as described here. [00069] Data storage 216 may comprise one or more computer-readable storage media that can be read by at least processor 214. The one or more computer-readable storage media may comprise volatile and/or non-volatile storage components, such as optical, magnetic, organic, or other memory or disk storage, which can be integrated in whole or in part with processor 214. In some embodiments, data storage 216 is implemented using a single physical device (for example, one optical, magnetic, organic, or other memory or disk storage unit), while in other embodiments, data storage 216 is implemented using two or more physical devices.
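As a simple illustration of the CRC-based transmission-verification values mentioned in paragraph [00067], a checksum can be appended to an outgoing message and re-checked on receipt. The frame layout below (payload followed by a 4-byte CRC-32 footer) is an illustrative assumption, not the format used by the disclosed embodiments:

```python
import struct
import zlib

def frame_message(payload: bytes) -> bytes:
    """Append a CRC-32 footer so the receiver can detect transmission errors."""
    return payload + struct.pack(">I", zlib.crc32(payload))

def check_message(frame: bytes) -> bytes:
    """Verify the CRC-32 footer; return the payload, or raise on corruption."""
    payload, footer = frame[:-4], frame[-4:]
    if struct.unpack(">I", footer)[0] != zlib.crc32(payload):
        raise ValueError("CRC mismatch: frame corrupted in transit")
    return payload

frame = frame_message(b"DUS report 42")
payload = check_message(frame)  # round-trips cleanly when undamaged
```

A CRC detects accidental corruption only; the confidentiality and authentication mentioned in the same paragraph would come from the cryptographic protocols listed there.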
[00070] Data storage 216 may include computer-readable program instructions 250 and possibly data. Computer-readable program instructions 250 may include instructions executable by processor 214 and any storage required, respectively, to perform at least part of the techniques described herein and/or at least part of the functionality of the devices and networks described herein. [00071] Figure 3 is a block diagram of an example client device 104. Client device 104 can include communications interface 310, user interface 320, feedback collector 330, rules engine 340, data collector 350, text analyzer 360, data analyzer 370, and data storage interface 380, all connected through interconnect 390. [00072] The component arrangement of client device 104 shown in Figure 3 is an example arrangement. In other embodiments, client device 104 can use more or fewer components than shown in Figure 3 to perform the functionality of client device 104 described herein. [00073] Communications interface 310 is configured to allow communications between client device 104 and other devices, possibly including allowing reliable, secure, and/or authenticated communications. An example communications interface 310 is communication network interface 212, described above in the context of Figure 2. User interface 320 is configured to allow communications between client device 104 and a user of client device 104, including, but not limited to, communicating reports (including data, analyses, strategies, and/or sub-strategies), requests, messages, feedback, and/or instructions, as described herein. An example user interface 320 is user interface 210, described above in the context of Figure 2. [00074] Feedback collector 330 is configured to request, receive, store, and/or retrieve "feedback" or input on reports, strategies, and/or sub-strategies, as described herein.
[00075] Rules engine 340 is configured to receive data, analyze the received data, and generate corresponding DUS reports. The received data may include, but is not limited to, complaint data, unprocessed test-related data, processed test-related data (for example, a statistical or differential analysis of the test-related data), reference data, classification/reliability determinations for received data, predetermined data values (for example, "hard-coded data"), and/or other data. [00076] The received data can be analyzed by matching the received data to one or more rules and/or an existing population of data. The one or more rules can be stored in a diagnostic rules and strategy database. A query of the rules and strategy database with respect to received data (for example, the complaint) may return one or more rules and possibly one or more sub-strategies related to the complaint. [00077] Each of the one or more related rules can "trigger" (for example, become applicable) based on the received data and/or the existing population of data. For example, a query to the rules and strategy database may include a complaint related to "turbulent engine performance". The returned rules may include a first fuel-flow-related rule that can be triggered based on received fuel flow data and/or additional received data related to DUS performance as it relates to fuel flow. As another example, a second fuel-flow-related rule can be triggered when comparing received fuel flow data with an "aggregated" (for example, previously stored) population of fuel flow data. Many other rules and/or examples are also possible. [00078] Based on the received data, rules engine 340 can determine which rule(s) should be triggered to determine one or more responses associated with the triggered rules. One possible response includes a set of one or more sub-strategies for diagnosing and/or repairing the DUS 102.
Example sub-strategies include recommendations to "replace the fuel flow sensor" or "inspect the fuel filter". Other example sub-strategies include request(s) for one or more additional tests to be performed on the DUS 102; for example, "test the battery voltage" or "operate the engine at 2500-3000 revolutions per minute (RPM)". The request(s) for additional test(s) may include instructions (that is, instructions for a technician, test parameters, and/or computer-language instructions/commands) to perform the additional test(s) on the DUS. [00079] The one or more responses may include diagnostic data, such as, but not limited to, one or more values of the received data, the aggregated data, and/or the baseline data, one or more comparisons between the received data, the aggregated data, and/or the baseline data, and one or more values and/or comparisons of similar, previously received data values in the aggregated data and/or baseline data. Other responses, sub-strategies, diagnostic data, and/or examples are also possible. [00080] A sub-strategy can be associated with a sub-strategy success estimate expressed as a percentage (for example, "Recommendation 2 is likely to be 28% successful"), as an ordered list of sub-strategies (for example, a higher-ordered sub-strategy would have a better sub-strategy success estimate than a lower-ordered sub-strategy), as a numerical and/or textual value (for example, "Action 1 has a grade of 95, which is an 'A' sub-strategy"), and/or by some other type of expression. [00081] The sub-strategy success estimate for a given sub-strategy can be adjusted based on feedback, such as the feedback collected by feedback collector 330, for the given sub-strategy. As an example: suppose that a response included three sub-strategies: SS1, SS2, and SS3. Suppose further that feedback was received that SS1 was unsuccessful, SS2 was successful, and SS3 was not used.
In response, the sub-strategy success estimate for SS1 can be reduced (i.e., SS1 is treated as unsuccessful), the sub-strategy success estimate for SS2 can be increased (i.e., SS2 is treated as successful), and the sub-strategy success estimate for SS3 can be maintained. Other feedback-based adjustments are also possible. In some embodiments, sub-strategy failure estimates can be determined, stored, and/or adjusted in place of (or in conjunction with) sub-strategy success estimates; for example, in these embodiments, sub-strategy failure estimates can be adjusted downwards when corresponding sub-strategies are used successfully, and adjusted upwards when corresponding sub-strategies are used unsuccessfully. [00082] In still other embodiments, one or more determined sub-strategies in the set of one or more sub-strategies can be excluded from a DUS report. As an example, a maximum number of sub-strategies MaxSS can be provided, and sub-strategies beyond the maximum number MaxSS could be excluded. Sub-strategies could then be selected based on a number of criteria; for example, the first (or last) MaxSS sub-strategies generated by rules engine 340, random selection of MaxSS sub-strategies, selection based on sub-strategy success estimates, and/or selection based on sub-strategy failure estimates. As another example, sub-strategies whose sub-strategy success estimate did not exceed a threshold success estimate value (or whose failure estimate exceeded a threshold failure estimate value) can be excluded. Combinations of these criteria can be used; for example, selecting the first MaxSS sub-strategies that exceeded the threshold success estimate value, or selecting the MaxSS sub-strategies that exceeded the threshold success estimate value and have the highest sub-strategy success estimates among all of the sub-strategies. [00083] The DUS report may include part or all of the statistical analysis, the diagnostic data, and some or all of the sub-strategies for DUS 102.
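The feedback-driven adjustment of [0081] and the MaxSS/threshold selection of [0082] can be sketched as follows. This is a minimal sketch, not the patent's implementation: the ±5-point adjustment step, the 0-100 scale, and the function names are illustrative assumptions.

```python
def adjust_success_estimates(estimates, feedback):
    """Adjust sub-strategy success estimates (0-100) from feedback.

    feedback maps a sub-strategy name to "success", "failure", or "unused";
    the +/-5 point step is an illustrative assumption, not from the patent.
    """
    adjusted = dict(estimates)
    for name, outcome in feedback.items():
        if outcome == "success":
            adjusted[name] = min(100, adjusted[name] + 5)
        elif outcome == "failure":
            adjusted[name] = max(0, adjusted[name] - 5)
        # "unused" leaves the estimate unchanged
    return adjusted

def select_sub_strategies(estimates, max_ss, threshold):
    """Keep sub-strategies above the threshold, best estimates first,
    truncated to at most max_ss entries (one criteria combination from [0082])."""
    passing = [(name, est) for name, est in estimates.items() if est > threshold]
    passing.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in passing[:max_ss]]

estimates = {"SS1": 40, "SS2": 70, "SS3": 55}
feedback = {"SS1": "failure", "SS2": "success", "SS3": "unused"}
estimates = adjust_success_estimates(estimates, feedback)
print(estimates)                                                  # {'SS1': 35, 'SS2': 75, 'SS3': 55}
print(select_sub_strategies(estimates, max_ss=2, threshold=50))   # ['SS2', 'SS3']
```

SS1 is penalized, SS2 rewarded, SS3 untouched; the selection then drops SS1 for falling below the threshold, matching the exclusion behavior described above.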
Other information, such as a report related to DUS 102 and/or complaint data, may be provided with the DUS report. [00084] An example display of a DUS report is shown in Table 1 below: TABLE 1 [00085] Although this example DUS report is shown as a text-only report, additional types of data can be used in the reports described herein (including, but not limited to, DUS reports), such as, but not limited to, visual/graphic/image data, video data, audio data, links and/or other address information (for example, Uniform Resource Locators (URLs), Uniform Resource Identifiers (URIs), Internet Protocol (IP) addresses, Media Access Control (MAC) addresses, and/or other address information), and/or computer instructions (for example, Hypertext Transfer Protocol (HTTP), eXtensible Markup Language (XML), Flash®, Java™, JavaScript™, and/or other computer-language instructions). [00086] Data collector 350 can coordinate test activities for one or more tests carried out during a "data collection session" of testing the DUS 102. To perform a test of a data collection session, data collector 350 can issue "DUS test requests", or requests for data related to DUS 102, and receive "DUS test-related data" in response. A DUS test request can be related to one or more "tests" of DUS 102. In this context, a test of DUS 102 can include performing one or more activities (for example, repair, diagnostics) on DUS 102, collecting data from DUS 102 (for example, obtaining data from one or more sensors of DUS 102), and/or receiving device-related data for DUS 102 (that is, receiving device-related data through a user interface or through a communication network interface). Test-related data for each test performed during the data collection session can be combined into the "DUS-related data" collected during the entire data collection session.
DUS-related data can include data obtained through a data logger operating to collect data during DUS 102 operation. [00087] Some DUS test requests can be made according to an OBD-II protocol, possibly through communications using an OBD-II message format. An OBD-II message format can include: start-of-frame and end-of-frame data, a message identifier, a remote-message-related identifier, an acknowledgment flag, cyclic redundancy check (CRC) data, and OBD-II payload data. OBD-II payload data can include a control field indicating a number of bytes in an OBD-II payload field, and the OBD-II payload field. The OBD-II payload field can specify an OBD-II mode, an OBD-II parameter identifier (PID), and additional payload data. OBD-II modes, for example, include, but are not limited to, modes to: show current data, show freeze frame data, show one or more previously recorded data frames (for example, "movies" of OBD-II data), show stored Diagnostic Trouble Codes (DTCs), clear DTCs and stored values, show test results for oxygen sensor monitoring, show test results for other components, show DTCs detected during the current or last driving cycle, control operation of an on-board component/system, request vehicle information, and a permanent/cleared DTC mode. OBD-II PIDs, for example, include, but are not limited to, freeze DTC, fuel system status, engine coolant temperature, fuel trim, fuel pressure, engine revolutions per minute (RPM), vehicle speed, timing advance, and intake air temperature. Many other OBD-II modes and OBD-II PIDs are also possible. [00088] As data responsive to DUS test requests is collected, data collector 350 can update a data collection display to show the progress of data collection and testing. An example data collection display is described in more detail with reference to Figure 5 below.
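The mode/PID payload structure of [0087] can be sketched as follows. This example builds only the length/mode/PID portion of the payload (framing, identifiers, and CRC from the paragraph above are omitted for brevity); the RPM decoding formula, (256·A + B)/4, is the standard SAE J1979 scaling for PID 0x0C, which is assumed here rather than taken from the patent.

```python
def build_obd2_request(mode: int, pid: int) -> bytes:
    """OBD-II payload: a control (length) byte, then the mode and PID bytes.

    Frame delimiters, message identifiers, and CRC data described in the
    text are omitted; only the payload field is shown.
    """
    payload = bytes([mode, pid])
    return bytes([len(payload)]) + payload

def decode_engine_rpm(response: bytes) -> float:
    """Decode a mode-01 PID-0x0C response; RPM = (256*A + B) / 4 per SAE J1979."""
    mode, pid, a, b = response[1], response[2], response[3], response[4]
    if mode != 0x41 or pid != 0x0C:
        raise ValueError("not an engine-RPM response")
    return (256 * a + b) / 4

print(build_obd2_request(0x01, 0x0C).hex())   # '02010c': 2 bytes, mode 01, PID 0C
# A response payload of 41 0C 1A F8 encodes (256*0x1A + 0xF8)/4 = 1726.0 RPM
print(decode_engine_rpm(bytes([0x04, 0x41, 0x0C, 0x1A, 0xF8])))   # 1726.0
```

Mode 0x01 is the "show current data" mode listed above, and a 0x41 response mode is the conventional 0x40 offset acknowledging it.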
The completion of data collection can be determined by a rules engine (for example, rules engine 340 and/or rules engine 440). [00089] Data collector 350 can receive and store test-related data, possibly in a "DUS profile" associated with the DUS 102. The DUS profile can be created, updated, and/or deleted by data collector 350. [00090] Along with test-related data, the DUS profile can be created and/or updated to store device-related data, complaint data, DUS reports, and/or test-related data taken at various times during the life of the DUS (for example, baseline data). As such, the data stored in the DUS profile can act as a service history of the DUS 102. In some embodiments, data collector 350 can generate a DUS profile report of the service history. An example DUS profile report is shown in Table 2. TABLE 2 [00091] As indicated above, the DUS profile report can include references to test-related data, obtained at various times, related to DUS 102. In other embodiments, some or all of the obtained test-related data can be included directly in the DUS profile report. In still other embodiments, the DUS profile report does not include references to test-related data. [00092] Text analyzer 360 can perform a "textual analysis" of the complaint data; that is, text analyzer 360 can parse, or otherwise examine, the complaint data to find keywords and/or phrases related to servicing (i.e., testing, diagnosing, and/or repairing) the DUS 102. For example, suppose the complaint data included a statement that "My car is running turbulently at idle, but it works smoothly at cruising speed".
The complaint data can be analyzed for keywords/phrases, such as "running turbulently", "idle", "running smoothly", and "cruising speed", to request one or more tests of overall engine performance (for example, based on the terms "running turbulently" and "running smoothly"), both in an "idle" condition and in a "cruising speed" condition. Many other examples of keywords/phrases, complaint data, parsing/examination of complaint data, and/or test requests are also possible. [00093] In some embodiments, the complaint can be specified using a "complaint code". For example, a complaint could be specified as an alphanumeric code; for example, code E0001 represents a general engine failure, code E0002 represents an engine with turbulent idling, etc. In these embodiments, the complaint data may include the complaint code. In some of these embodiments, text analyzer 360 can generate one or more complaint codes as a result of the textual analysis of the complaint. [00094] Data analyzer 370 can analyze data related to DUS 102. In particular, the data analyzer can generate a "statistical analysis" comparing the received data related to DUS 102 with an existing population of data. The existing population of data may include, but is not limited to, aggregated data, reference data, and/or stored data related to DUS 102. Reference data may include data from the manufacturer, a component supplier, and/or other sources indicating the expected data values for the DUS 102 when DUS 102 is operating normally. The stored data related to DUS 102 can include test data for the device 102, captured and stored at time(s) before reception of the received data. This stored data can include baseline data for DUS 102. Baseline data can be stored and possibly used for comparisons with data taken during operation related to a complaint (for example, test-related data accompanying the complaint data).
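The keyword/phrase analysis and complaint-code generation of [0092]-[0093] can be sketched as follows. The keyword-to-test table and the code mapping are illustrative assumptions; the patent does not define their contents beyond the examples quoted above.

```python
# Illustrative keyword/phrase -> test-request table; the patent gives only
# the "running turbulently" / "cruising speed" examples, not a full table.
KEYWORD_TESTS = {
    "running turbulently": "overall engine performance test",
    "idle": "test in idle condition",
    "cruising speed": "test in cruising-speed condition",
}
# E0002 ("engine with turbulent idling") is taken from the complaint-code
# example in the text; the association with this phrase is an assumption.
COMPLAINT_CODES = {"running turbulently": "E0002"}

def analyze_complaint(text: str):
    """Scan complaint text for known keywords/phrases and return the
    matching test requests plus any generated complaint codes."""
    lowered = text.lower()
    tests = [test for phrase, test in KEYWORD_TESTS.items() if phrase in lowered]
    codes = [code for phrase, code in COMPLAINT_CODES.items() if phrase in lowered]
    return tests, codes

complaint = ("My car is running turbulently at idle, "
             "but it works smoothly at cruising speed")
tests, codes = analyze_complaint(complaint)
print(tests)   # all three table entries match this complaint
print(codes)   # ['E0002']
```

A production text analyzer would likely use stemming or fuzzier matching than plain substring search; this sketch only shows the keyword-to-test-request flow.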
In some embodiments, the aggregated data may include some or all of the reference data and the stored data related to DUS 102. As such, the aggregated data can be treated as the existing data population. [00095] Statistical analysis may include matching received data with a subset of the existing population of data, such as by matching received data for a given DUS with an existing population of data for device(s) that share the same core-device information (for example, year, model, make, ECU information) with the DUS. Many other types of subset matching against the existing data population are also possible, such as using information other than core-device information, narrowing a subset of data, and/or expanding a subset of data. [00096] An example of narrowing the data subset includes filtering the data subset for a particular ECU version. Example expansions of the data subset include: adding similar vehicle models sharing core-device information, adding data from earlier and/or later model years, and/or adding data from different brands that are manufactured by a common manufacturer. Many other examples of subset matching, narrowing of data subsets, and expansion of data subsets are also possible. [00097] Statistical analysis may include indications of matching values between the received data and the existing data population, range(s) of values from the existing data population and a comparison of the received data to the range (for example, determining whether the coolant temperature for the existing population of data is between 68.3 and 79.4 °C (155 °F and 175 °F) and whether the received coolant temperature of 71.1 °C (160 °F) is within this range), and/or determining statistics for the received data and/or the existing population of data (for example, mean, median, mode, variance, and/or standard deviation).
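The subset matching of [0095]-[0096] and the range/statistics comparison of [0097] can be sketched together. The record fields, population values, and the 71.1 °C received reading mirror the coolant-temperature example in the text; the data structures themselves are illustrative assumptions.

```python
import statistics

def match_subset(population, dus, keys=("year", "make", "model")):
    """Select population records sharing the DUS's core-device information.

    Passing more keys (e.g. "ecu") narrows the subset; passing fewer
    expands it, as described in the text.
    """
    return [rec for rec in population
            if all(rec.get(k) == dus.get(k) for k in keys)]

def compare_to_subset(value, readings):
    """Compare a received value against the subset's range and statistics."""
    return {
        "in_range": min(readings) <= value <= max(readings),
        "mean": statistics.mean(readings),
        "stdev": statistics.stdev(readings),
    }

population = [
    {"year": 2010, "make": "Tuftrux", "model": "T1", "coolant_c": 68.3},
    {"year": 2010, "make": "Tuftrux", "model": "T1", "coolant_c": 79.4},
    {"year": 2010, "make": "Tuftrux", "model": "T1", "coolant_c": 72.5},
    {"year": 2011, "make": "Tuftrux", "model": "T2", "coolant_c": 95.0},
]
dus = {"year": 2010, "make": "Tuftrux", "model": "T1"}

subset = match_subset(population, dus)
readings = [rec["coolant_c"] for rec in subset]
result = compare_to_subset(71.1, readings)     # received coolant reading, deg C
print(len(subset))           # 3: the 2011 T2 record is filtered out
print(result["in_range"])    # True: 71.1 lies within 68.3-79.4
```

The "Tuftrux" make is the hypothetical manufacturer name used elsewhere in this description.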
Statistical analysis can include analysis of data from one or more sensors and/or of one or more types of data (for example, analysis of both fuel trim and fuel pressure data). [00098] Statistical analysis may include comparisons of data received from DUS 102 over time. For example, the received data can be compared with the baseline data for DUS 102 to generate a statistical analysis and/or a differential analysis between the baseline data and the received data. In generating the statistical analysis and/or differential analysis, data analyzer 370 can use one or more of the techniques for classifying test-related data discussed below in the context of Figure 4. For example, data analyzer 370 can classify one or more data values as baseline data. [00099] In addition, or instead, the received data generated within a test time interval (for example, received data that includes a number of data samples collected at various times during the test time interval) can be statistically analyzed; for example, to determine statistics within the test interval, to remove or identify outlying data points, and/or for other types of statistical analysis. In some embodiments, reference data and/or aggregated data can be used as baseline data. In still other embodiments, the data in the existing data population can be statistically analyzed within test time intervals. [000100] Data storage interface 380 is configured to store and/or retrieve data and/or instructions used by client device 104. An example data storage interface 380 is data storage 216, described above in the context of Figure 2. [000101] Figure 4 is a block diagram of an example server device 106. Server device 106 can include communications interface 410, data aggregator 420, data analyzer 430, rules engine 440, text analyzer 450, data collector 460, feedback collector 470, and data storage interface 480, all connected through interconnect 490.
[000102] The component arrangement of server device 106 shown in Figure 4 is an example arrangement. In other embodiments, server device 106 can use more or fewer components than shown in Figure 4 to perform the functionality of server device 106 described herein. [000103] Communications interface 410 is configured to allow communications between server device 106 and other devices. An example communications interface 410 is communication network interface 212, described above in the context of Figure 2. [000104] Data aggregator 420 can create, update, and/or delete a DUS profile associated with DUS 102, and possibly generate a related DUS profile report using the techniques described above with respect to Figure 3. Thus, client device 104 and/or server device 106 can maintain DUS profile(s) and generate DUS profile report(s). [000105] Data aggregator 420 can classify data based on a complaint. For example, all test-related data that relates to complaints about DUSs that fail to start can be classified as data related to "failing to start" complaints. When aggregating a dataset sharing a common classification, a portion of the data can be retained as aggregated data. For example, data in the "failing to start" classification related to starters, batteries, and electrical systems could be aggregated. Other data can be aggregated as well, aggregated under another classification, and/or discarded. For example, data that is unlikely to be correlated with a complaint can be reclassified and aggregated based on the reclassification. In this context, tire pressure data, carried as part of the "failing to start" test-related data, could be reclassified as "tire pressure data" and then aggregated. Many other types of aggregation based on complaint-driven classifications are also possible. [000106] Data aggregator 420 can classify test-related data based on reliability.
Classifying test-related data for reliability may include comparing data values from the test-related data with reference and/or baseline data. Some example techniques for comparing data values with reference/baseline data are to determine that:
- a data value is the same as one or more reference/baseline values (for example, a tire pressure reading TPR is 32 pounds per square inch (PSI); a manufacturer's name is "Tuftrux"),
- the data value is within a range of data values (for example, TPR is between 28 and 34 PSI),
- the data value is above and/or below a threshold of a reference/baseline value (for example, TPR is in the range of 31 ± t PSI, where t is the threshold value),
- the data value starts with, ends with, or contains the reference/baseline value (for example, a vehicle identification number (VIN) starts with a "1" or contains the character string "1GT"),
- each of a number of data values is the same as, within a range of, and/or within a threshold of a reference/baseline value (for example, the tire pressure readings for all four tires are all within 28-34 PSI),
- computation(s) on the data value(s), possibly including reference and/or baseline values, are calculated and possibly compared to the reference and/or baseline values (for example, taking an average of a number of data values; converting English-system measurement data values to metric-system equivalents and comparing the converted values to metric reference values; or using data values in a formula and comparing the result of the formula with reference values), and/or
- negations of these conditions (for example, a temperature is not within the range of 43.3 to 54.4 °C (110-130 °F)).
[000107] Many other techniques for determining whether data values are reliable based on reference and/or baseline data are also possible.
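A few of the comparison techniques listed above (exact match, range, threshold, and prefix/substring matching) can be encoded as small predicates. This is a minimal sketch; the sample values are taken from the examples in the text, and the function names are assumptions.

```python
def equals(value, ref):
    """Exact match against a reference/baseline value."""
    return value == ref

def in_range(value, low, high):
    """Value falls within a range of reference values."""
    return low <= value <= high

def within_threshold(value, ref, t):
    """Value lies within ref +/- t (the threshold technique)."""
    return abs(value - ref) <= t

def starts_or_contains(value, prefix=None, substring=None):
    """Value starts with and/or contains a reference string (the VIN technique)."""
    ok = True
    if prefix is not None:
        ok = ok and value.startswith(prefix)
    if substring is not None:
        ok = ok and substring in value
    return ok

tpr = 32  # tire pressure reading, PSI
print(in_range(tpr, 28, 34))                        # True: within 28-34 PSI
print(within_threshold(tpr, 31, 3))                 # True: within 31 +/- 3 PSI
print(starts_or_contains("1GT456789", prefix="1", substring="1GT"))  # True
print(equals("Tuftrux", "Tuftrux"))                 # True
print(not in_range(48.9, 43.3, 54.4))               # False: negation example fails here
```

Negations of any predicate, and per-wheel checks over all four tires, follow directly by applying `not` or `all()` to these building blocks.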
[000108] Reference and/or baseline data can include predetermined values (for example, 28 PSI), ranges (for example, 28-34 PSI), thresholds (for example, a threshold of 3 PSI), formulas (for example, C = (F − 32) / 1.8), and/or matching patterns (for example, "1*2" as a pattern that matches any string that starts with a "1" and ends with a "2"). [000109] Reference and/or baseline data can also be based on data values previously classified as reliable. For example, suppose that three devices had respective temperature readings of 98, 99, and 103 degrees, and that all three temperature readings were reliable. Then, the average A of these three values (100 degrees) and/or the range R of these three values (5 degrees) can be used as reference values: for example, a temperature data value can be compared with A, A + R, A − R, A ± R, A ± cR for a constant value c, or c1A ± c2R for constant values c1, c2. Many other bases for using reliable data values as reference and/or baseline data are also possible. [000110] Reference and/or baseline data can be based on a statistical selection of data. Statistical selection may involve generating one or more statistics for data to be aggregated into the reference and/or baseline data, and then aggregating the data based on the generated statistics. [000111] For example, suppose that test-related data included a measurement value Meas1 obtained using a sensor Sens1. Suppose further that aggregated data related to measurement values from the sensor Sens1 indicated a mean measurement value MeanMeas1 with a standard deviation SDMeas1. Then, a number NSD of standard deviations from the mean MeanMeas1 to Meas1 could be determined, possibly using the formula NSD = |Meas1 − MeanMeas1| / SDMeas1. [000112] Then, the measured value Meas1 could be added to the aggregated data related to the sensor Sens1 when the number NSD of standard deviations is less than or equal to a limit number of standard deviations NSD_Limit.
For example, if NSD_Limit = 2, then Meas1 would be added to the aggregated data if Meas1 was within 2 standard deviations of the mean MeanMeas1. [000113] In some embodiments, statistical selection for a set of data values may be performed only if a predetermined number N of data values has already been added to the data value set. In these embodiments, if the number of aggregated data values is less than N, then data values can be aggregated without statistical selection until at least N data values have been aggregated. In some embodiments, N may be large enough to aggregate data without selection for a considerable period of time (for example, one or more months). Then, after the considerable amount of time, selection can be carried out, thus allowing the collection of data for the considerable amount of time without focusing on good or failed values. Many other types of statistical selection are also possible. [000114] Data aggregator 420 can classify test-related data in conjunction with rules engine 440. In some embodiments, rules engine 440 can instruct data aggregator 420 to use one or more techniques for classifying one or more data values in the test-related data. In other embodiments, data aggregator 420 can communicate some or all of the test-related data and/or some or all of the baseline values to rules engine 440; rules engine 440 can then classify the test-related data and subsequently communicate a classification of the test-related data to data aggregator 420. In some of these other embodiments, data aggregator 420 may perform a preliminary classification of the test-related data and, upon a preliminary classification that the test-related data is reliable, communicate some or all of the test-related data and/or some or all of the baseline values to rules engine 440 for a final determination of reliability.
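The standard-deviation selection of [000111]-[000113], including the minimum-population rule N, can be sketched as follows. The NSD formula matches the description above; the parameter defaults are illustrative assumptions.

```python
import statistics

def should_aggregate(meas, aggregated, nsd_limit=2, min_n=5):
    """Decide whether a measurement joins the aggregated data.

    Below min_n values, aggregate without selection ([000113]); afterwards,
    keep meas only if NSD = |meas - mean| / stdev <= nsd_limit ([000111]-[000112]).
    """
    if len(aggregated) < min_n:
        return True
    mean = statistics.mean(aggregated)
    stdev = statistics.stdev(aggregated)
    nsd = abs(meas - mean) / stdev
    return nsd <= nsd_limit

aggregated = [98.0, 99.0, 100.0, 101.0, 102.0]   # mean 100, stdev ~1.58
print(should_aggregate(101.5, aggregated))        # True: within 2 standard deviations
print(should_aggregate(110.0, aggregated))        # False: an outlying value
print(should_aggregate(110.0, [98.0, 99.0]))      # True: fewer than min_n values so far
```

With only two aggregated values, even the outlier is accepted, mirroring the text's point that early data is collected without focusing on good or failed values.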
Data finally determined to be reliable can then be added to the baseline data, as described above. In still other embodiments, data aggregator 420 can determine whether test-related data is reliable without communicating with rules engine 440. [000115] Classified data values and/or reference values can be combined, or aggregated, into the aggregated data by data aggregator 420. The aggregated data can be updated over time; for example, classified data values can be added to, or otherwise combined with, the aggregated data based on a classification of the data values. In some embodiments, aggregated data may include data values that have not been classified; for example, total data populations, or all data for a specific DUS. The aggregated data can be stored, possibly in a database, and later retrieved and used for classifications and/or for other purposes. [000116] Data analyzer 430 can analyze data related to DUS 102, as described above for data analyzer 370 in the context of Figure 3. [000117] Rules engine 440 can receive data, possibly including complaint data, analyze the received data, and generate corresponding DUS reports, as described above for rules engine 340 in the context of Figure 3. [000118] Text analyzer 450 can parse, or otherwise examine, complaint data to find keywords and/or phrases related to servicing the DUS 102, as described above for text analyzer 360 in the context of Figure 3. [000119] Data collector 460 can coordinate test activities for one or more tests performed during a data collection session of testing the DUS 102, as described above for data collector 350 in the context of Figure 3. [000120] Feedback collector 470 is configured to request, receive, store, and/or retrieve "feedback" or input on reports and/or sub-strategies, as described above for feedback collector 330 in the context of Figure 3.
[000121] Data storage interface 480 is configured to store and/or retrieve data and/or instructions used by server device 106. An example data storage interface 480 is data storage 216, described above in the context of Figure 2. [000122] Figure 5 represents an example data collection display 500, including DUS identification 510, global status bar 520, detailed diagnostic status 530, and test status bars 540, 542, 544, and 546. [000123] DUS identification 510 may include device-related data that specifies a DUS. Global status bar 520 can show visually, numerically, and/or textually the status of a data collection session. As shown in Figure 5, global status bar 520 graphically, textually, and numerically shows the percentage completion of the data collection session; in this example, the data collection session is 63% complete. [000124] Detailed diagnostic status 530 can provide additional progress information about the data collection session, such as, but not limited to, communication status (for example, the "Established Communication" and "Communication" indicators shown in Figure 5), data entry status (for example, the "Complaint Captured" indicator shown in Figure 5), test-related data capture status (for example, the "Code Verification", "Monitors", and "Collect Data" indicators shown in Figure 5), and analysis status (for example, the "Data Analysis" indicator shown in Figure 5). [000125] Test status bars 540, 542, 544, and 546 can provide the status of one or more tests conducted during the data collection session. As shown in Figure 5, test status bars 540, 542, 544, and 546 each graphically, textually, and numerically show, respectively, the percentage completion of a test; for example, test status bar 540 in Figure 5 shows that the "Start Test" is 80% complete. [000126] In some embodiments, data collection display 500 can be enhanced with the use of spoken instructions and/or audible tones.
For example, an audible tone and/or spoken message can be used to instruct a vehicle technician to change the operating state of, and/or perform another test on, a device in service; for example, an audible message or instruction to: "Please increase the throttle now to operate the vehicle at 2500 RPM". As another example, an audible tone and/or spoken message can be used to indicate that operation is outside the expected ranges; for example, for a 2500 RPM test, an audible tone and/or spoken message can instruct the technician to increase the throttle when the RPM rate is below the desired 2500 RPM rate. In addition, text that corresponds to such audible instructions can be displayed on data collection display 500. III. EXAMPLE COMMUNICATIONS [000127] A variety of communications can be performed via network 110. Examples of those communications are illustrated in Figures 6A, 6B, 6C, 6D, 7A, 7B, and 7C. The communications shown in Figures 6A, 6B, 6C, 6D, 7A, 7B, and 7C can be in the form of messages, signals, packets, protocol data units (PDUs), frames, fragments, and/or any other appropriate type of communication configured to be communicated between devices. [000128] Figure 6A shows an example scenario 600 for processing diagnostic request 610, responsively generating DUS report display 632, and receiving success-related data 640. Scenario 600 begins with diagnostic request 610 being received at client device 104. Client device 104 inspects diagnostic request 610 to determine one or more tests related to DUS 102, responsively generates DUS test request 612 to perform the one or more tests, and communicates DUS test request 612 to DUS 102. In some embodiments, data collector 350 of client device 104 generates DUS test request 612. In some embodiments, DUS test request 612 is formatted according to an OBD-II protocol.
[000129] Client device 104 also inspects diagnostic request 610 for a complaint (shown in Figure 6A as “C1” with diagnostic request 610). In some embodiments, complaint C1 is not inspected on client device 104; whereas, in other embodiments, text analyzer 360 can perform a textual analysis of complaint C1. Complaint C1 can be provided by a user as text and/or as a complaint code, as mentioned above.
[000130] Upon receipt of DUS test request 612 at DUS 102, the one or more tests are performed. The data resulting from the one or more tests are collected and communicated from DUS 102 to client device 104 as data related to DUS 614. Client device 104 then communicates the data related to DUS and complaint C1 to server device 106 using data related to DUS 616.
[000131] Figure 6A shows that, in response to data related to DUS 616, the server device generates diagnostic request 620 with a request for one or more additional tests (represented as T1). Details of the generation of diagnostic request 620 are described below with reference to Figure 6B.
[000132] Upon receipt of diagnostic request 620, client device 104 communicates DUS test request 622 to perform additional tests T1. Upon receipt of DUS test request 622 at DUS 102, the one or more additional tests T1 are performed. Data from the one or more additional tests are collected and communicated from DUS 102 to client device 104 as data related to DUS 624. Client device 104 then communicates the data related to DUS and complaint C1 to server device 106 using data related to DUS 626. In some scenarios not shown in Figure 6A, data related to DUS 626 does not include complaint C1, as C1 has already been communicated to server device 106 (via data related to DUS 616) and so C1 could be stored by server device 106.
[000133] Figure 6A shows that, in response to data related to DUS 626, server device 106 generates DUS report 630 with strategy S1 and communicates DUS report 630 to client device 104.
In some embodiments, strategy S1 includes one or more sub-strategies SS1, SS2, etc. to address complaint C1. Sub-strategies for addressing complaints are discussed above in more detail with respect to Figure 3. Details of the generation of DUS report 630 are described below with respect to Figure 6C.
[000134] In response to receiving DUS report 630, client device 104 generates and communicates DUS report display 632. An example DUS report display is shown in Table 1 above. After communicating DUS report display 632, scenario 600 continues with client device 104 receiving data related to success 640. Figure 6A shows data related to success 640 with F(SS1), which is feedback F for sub-strategy SS1 of strategy S1. Feedback on the sub-strategies is discussed in more detail above with respect to Figure 3. In response to data related to success 640, client device 104 communicates corresponding data related to success 642 with F(SS1) to server device 106.
[000135] In some scenarios not shown in Figure 6A, server device 106 can send a DUS report in response to data related to DUS 616 (i.e., server device 106 does not request additional tests/data). In other scenarios not shown in Figure 6A, server device 106 can send two or more diagnostic requests to request additional test(s). In other scenarios not shown in Figure 6A, client device 104 can receive and analyze data related to DUS 616 and 626 to generate DUS report 630, as described in more detail below with respect to Figures 7A, 7B, and 7C. That is, client device 104 can perform some or all of the functionality described herein with respect to server device 106 in scenarios 600, 650, and/or 680. In still other scenarios not shown in Figure 6A, no data related to success are received in response to DUS report display 632 (i.e., no feedback on strategy S1 is provided to client device 104 and/or server device 106).
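The message exchange of scenario 600 can be summarized, purely for illustration, by the following trace. The `Message` type, field names, and payload contents are hypothetical; the patent does not prescribe a wire format, only the sequence of communications:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str    # "user", "client", "server", or "dus"
    receiver: str
    kind: str      # e.g. "diagnostic_request", "test_request", "dus_data", "dus_report"
    payload: dict = field(default_factory=dict)

def scenario_600_trace(complaint="C1"):
    """Illustrative ordered trace of the communications in scenario 600."""
    return [
        Message("user", "client", "diagnostic_request", {"id": 610, "complaint": complaint}),
        Message("client", "dus", "test_request", {"id": 612}),
        Message("dus", "client", "dus_data", {"id": 614}),
        Message("client", "server", "dus_data", {"id": 616, "complaint": complaint}),
        Message("server", "client", "diagnostic_request", {"id": 620, "tests": ["T1"]}),
        Message("client", "dus", "test_request", {"id": 622}),
        Message("dus", "client", "dus_data", {"id": 624}),
        Message("client", "server", "dus_data", {"id": 626}),
        Message("server", "client", "dus_report", {"id": 630, "strategy": "S1"}),
    ]
```

The trace ends with the server returning DUS report 630, after which the client would render the report display and optionally return success feedback.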
[000136] Figure 6B shows an example scenario 650 for processing data related to DUS 616 and responsively generating diagnostic request 620. Data related to DUS 616 with complaint C1 are received at communications interface 410 of server device 106. Figure 6B shows that complaint query 662 is generated by text analyzer 450 in response to complaint C1 660. Complaint query 662 can include keywords/phrases, as determined based on the textual analysis of complaint C1, as described above with respect to Figure 3.
[000137] Data related to DUS 670 are communicated from communications interface 410 to both data aggregator 420 and data analyzer 430. Figure 6B shows that complaint C1 is not included with data related to DUS 670; but, in some embodiments not shown in Figure 6B, data related to DUS 670 include C1 (i.e., they are a copy of data related to DUS 616).
[000138] Upon receiving data related to DUS 670, data aggregator 420 and/or rules engine 440 can classify the data related to DUS using the techniques described above in the context of Figure 4. As part of the classification, data aggregator 420 can query or otherwise access aggregate data 672 to determine baseline data 674 (shown in Figure 6B as “Database 674”) for data related to DUS 670. In classifying data related to DUS 670, classification 676 (shown in Figure 6B as “Class 676”) can be generated by data aggregator 420 and/or rules engine 440. Once generated, classification 676 can be communicated to rules engine 440. Additionally, data related to DUS 670 can be stored, possibly according to and/or together with classification 676, by data aggregator 420 in aggregate data 672.
[000139] Upon receiving data related to DUS 670, data analyzer 430 can generate statistical analysis (SA) 678 of data related to DUS 670, possibly based on baseline data 674, using the techniques described above in the context of Figures 3 and 4.
Data analyzer 430 can communicate statistical analysis 678 to rules engine 440.
[000140] Upon receipt of complaint query 662, classification 676, and statistical analysis 678, rules engine 440 can communicate query 666 with complaint data (shown in Figure 6B as “Comp”) and statistical analysis SA to diagnostic rules and strategy database 664 (shown in Figure 6B as “Diag Rules/Strat 664”) using the techniques described above in the context of Figures 3 and 4. In response, diagnostic rules and strategy database 664 can communicate strategy 668 (shown in Figure 6B as “S0”), including one or more rules and associated sub-strategies, to rules engine 440. Using the techniques described above in the context of Figures 3 and 4, rules engine 440 can determine which rule(s) of strategy 668 to fire, and thus determine the sub-strategy/sub-strategies associated with the fired rule(s). In scenario 650, rules engine 440 determines which additional data are required, based on the fired rule(s) and associated sub-strategy/sub-strategies. Rules engine 440 can generate diagnostic request 620 to perform test(s) T1 to obtain the additional data and communicate diagnostic request 620 to communications interface 410. Communications interface 410 can then send diagnostic request 620.
[000141] Figure 6C shows an example scenario 680 for processing data related to DUS 626 and responsively generating DUS report 630. Data related to DUS 626 with complaint C1 are received at communications interface 410 of server device 106. In scenarios not shown in Figure 6C, complaint C1 can be analyzed by a text analyzer to determine a complaint query. However, scenario 680 assumes that C1 has already been analyzed by a text analyzer, as described above with respect to Figures 3 and 6B.
[000142] Data related to DUS 682 are communicated from communications interface 410 to data analyzer 430.
In scenarios not shown in Figure 6C, data related to DUS 626 are provided to a data aggregator, possibly to be combined with aggregate data, as described above in Figures 4 and 6B. Figure 6C shows that complaint C1 is not included with data related to DUS 682; but, in some embodiments not shown in Figure 6C, data related to DUS 682 include C1 (i.e., they are a copy of data related to DUS 626).
[000143] Upon receiving data related to DUS 682, data analyzer 430 can generate statistical analysis 686 (shown in Figure 6C as “SA2”) of data related to DUS 682, using the techniques described above in the context of Figures 3, 4, and 6B. Data analyzer 430 can communicate statistical analysis 686 to rules engine 440.
[000144] Upon receiving statistical analysis 686, rules engine 440 can communicate query 688 with the previously determined complaint data (shown in Figure 6C as “Comp”) and statistical analysis 686 to diagnostic rules and strategy database 664 (shown in Figure 6C as “Diag Rules/Strat 664”) using the techniques described above in the context of Figures 3 and 4. In response, diagnostic rules and strategy database 664 can communicate strategy 690 (shown in Figure 6C as “S1+”), including one or more rules and associated sub-strategy/sub-strategies, to rules engine 440. Using the techniques described above in the context of Figures 3 and 4, rules engine 440 can determine which rule(s) must be fired and their associated sub-strategy/sub-strategies. In scenario 680, rules engine 440 generates DUS report 630, which can include some or all of statistical analysis 686 and/or some or all of the sub-strategies of strategy 690 (collectively shown in Figure 6C as “S1”), and communicates DUS report 630 to communications interface 410. Communications interface 410 can then send DUS report 630.
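The rule-firing step performed by the rules engine in scenarios 650 and 680 might be sketched as follows, under the assumption that each rule pairs a condition over the statistical analysis with the sub-strategies it triggers. The rule contents and dictionary keys here are invented for illustration; the patent does not specify a rule representation:

```python
def fire_rules(rules, analysis):
    """Return the sub-strategies of every rule whose condition holds
    for the given statistical-analysis dictionary, in rule order."""
    sub_strategies = []
    for condition, subs in rules:
        if condition(analysis):
            sub_strategies.extend(subs)
    return sub_strategies

# Hypothetical example rules (condition, sub-strategy list):
rules = [
    (lambda a: a.get("o2_voltage_flagged", False),
     ["SS1: inspect oxygen sensor wiring"]),
    (lambda a: a.get("misfire_count", 0) > 5,
     ["SS2: check ignition coils"]),
]
```

A rule that fires but needs more data would, as in scenario 650, yield a diagnostic request for additional tests rather than a final sub-strategy.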
[000145] Figure 6D shows another example scenario 600a for processing diagnostic request 610, responsively generating DUS report display 632, and receiving data related to success 640. Scenario 600a is an alternative to scenario 600 in which server device 106 directs the testing of DUS 102, in place of client device 104.
[000146] Scenario 600a begins with diagnostic request 610 being received at client device 104. Client device 104 transmits diagnostic request 610 as diagnostic request 610a to server device 106. Server device 106 can examine diagnostic request 610a to determine one or more tests related to DUS 102, responsively generate DUS test request 612a to perform the one or more tests, and communicate DUS test request 612a to DUS 102.
[000147] Server device 106 can inspect diagnostic request 610a for the complaint (shown in Figure 6D as “C1” with diagnostic request 610a). In some embodiments, complaint C1 is not inspected on server device 106; whereas, in other embodiments, text analyzer 450 can perform a textual analysis of complaint C1. Complaint C1 can be provided by a user as text and/or as a complaint code, as mentioned above.
[000148] Upon receipt of DUS test request 612a at DUS 102, one or more tests are performed. Data resulting from the one or more tests are collected and communicated from DUS 102 to server device 106 as data related to DUS 614a.
[000149] Figure 6D shows that, in response to data related to DUS 614a, server device 106 generates DUS test request 622a to perform one or more additional tests (represented as T1). Server device 106 generates DUS test request 622a using techniques similar to those described in Figure 6B for generating diagnostic request 620.
[000150] Upon receipt of DUS test request 622a at DUS 102, the one or more additional tests T1 are performed. Data from the one or more additional tests are collected and communicated from DUS 102 to server device 106 as data related to DUS 624a.
[000151] Figure 6D shows that, in response to data related to DUS 624a, server device 106 generates DUS report 630 with strategy S1 and communicates DUS report 630 to client device 104. Details of the generation of DUS report 630 are described above with respect to Figure 6C.
[000152] The remainder of scenario 600a, with respect to DUS report display 632, data related to success 640, and data related to success 642 with F(SS1), is the same as discussed above for scenario 600 in the context of Figure 6A.
[000153] In some scenarios not shown in Figure 6D, server device 106 can send a DUS report in response to data related to DUS 614a (i.e., server device 106 does not request additional tests/data). In other scenarios not shown in Figure 6D, server device 106 can send two or more DUS test requests to request additional test(s). In still other scenarios not shown in Figure 6D, no data related to success are received in response to DUS report display 632 (i.e., no feedback on strategy S1 is provided to client device 104 and/or server device 106).
[000154] Figure 7A shows an example scenario 700 for processing diagnostic request 710, responsively generating DUS report display 730, and receiving data related to success 732 at client device 104. In some embodiments not shown in Figure 7A, some or all of the techniques and communications involving client device 104 can be performed by server device 106.
[000155] Client device 104 can determine one or more tests related to DUS 102 based on the received diagnostic request 710 and responsively generate DUS test request 720 for DUS 102 to perform the one or more tests. In some embodiments, data collector 350 generates DUS test request 720. In some embodiments, DUS test request 720 is formatted according to an OBD-II protocol.
[000156] The test(s) in DUS test request 720 refer to a first operating state (shown as “State1” in Figure 7A) of DUS 102. Example operating states of DUS 102 include an unloaded/lightly loaded operating state (for example, an “idle” operating state), several operating states under normal loads (for example, a “cruising” operating state, a “start-up” operating state), and operating states at, or close to, maximum load (for example, a “high speed” operating state). Other operating states are also possible.
[000157] Client device 104 can inspect diagnostic request 710 for the complaint (shown in Figure 7A as “C2” with diagnostic request 710). Complaint C2 can be provided by a user as text and/or as a complaint code, as mentioned above. In scenario 700, client device 104 (for example, text analyzer 360) can perform a textual analysis of complaint C2.
[000158] Upon receipt of DUS test request 720 at DUS 102, the one or more tests associated with the first operating state State1 are performed. Data from the one or more tests are collected and communicated to client device 104 as data related to DUS 722.
[000159] In scenarios not shown in Figure 7A, one or more additional sequences of DUS test requests and data related to DUS can be communicated between client device 104 and DUS 102; for example, to communicate additional data required while DUS 102 is operating in the first operating state State1, or to communicate data while DUS 102 is operating in operating state(s) other than the first operating state State1.
[000160] Figure 7A shows that, in response to receiving data related to DUS 722, client device 104 generates and communicates DUS report display 730 related to strategy S2. An example DUS report display is shown above in Table 1. After communicating DUS report display 730, scenario 700 continues with client device 104 receiving data related to success 732.
Figure 7A shows data related to success 732 with F(SS3, SS4), which is feedback F for sub-strategies SS3 and SS4 of strategy S2. Sub-strategy feedback is discussed above in more detail with respect to Figure 3 and Figure 6A. In other scenarios not shown in Figure 7A, no data related to success are received in response to DUS report display 730 (that is, no feedback on strategy S2 is provided to client device 104).
[000161] Figure 7B shows an example scenario 750 for processing diagnostic request 710 and responsively generating DUS test request 720. Diagnostic request 710 with complaint C2 is received at communications interface 310 of client device 104. Figure 7B shows that complaint query 762 is generated by text analyzer 360 in response to complaint C2 760. Complaint query 762 can include keywords/phrases, as determined based on the textual analysis of complaint C2, as described above with respect to Figures 3 and 6B.
[000162] Diagnostic request 710 is communicated from communications interface 310 to both rules engine 340 and data collector 350. Figure 7B shows that complaint C2 is included with diagnostic request 710; but, in some embodiments not shown in Figure 7B, a diagnostic request without complaint C2 can be provided to rules engine 340 and/or data collector 350.
[000163] Upon receipt of complaint query 762 and diagnostic request 710, rules engine 340 can communicate query 764 with the complaint data (shown in Figure 7B as “Comp2”) to diagnostic rules and strategy database 770 (shown in Figure 7B as “Diag Rules/Strat 770”) using the techniques described above in the context of Figures 3, 4, and 6B. In response, diagnostic rules and strategy database 770 can communicate differential test request 766 related to the operating states of DUS 102, shown in Figure 7B as “State1”.
Rules engine 340 can generate DUS test request 720 to run test(s) to obtain data related to a first operating state State1 and communicate DUS test request 720 to communications interface 310. Communications interface 310 can then send DUS test request 720.
[000164] Upon receipt of diagnostic request 710, data collector 350 can create or update DUS profile 776, using the techniques described above in the context of Figure 3. DUS profile 776 can be stored in a database of DUS profiles, such as profile data 772 shown in Figure 7B. Profile data 772 can be queried to create, update, and retrieve DUS profiles based on profile-related criteria, as described above in the context of Figure 3.
[000165] Figure 7C shows an example scenario 780 for processing data related to DUS 722 and responsively generating DUS report display 730. Data related to DUS 722, related to the first operating state State1 of DUS 102, are received at communications interface 310. In scenarios not shown in Figure 7C, data related to DUS 722 may include complaint C2, which in turn can be analyzed by a text analyzer (for example, text analyzer 360) to determine the complaint query. However, scenario 780 assumes that C2 has already been analyzed by a text analyzer, as described above with respect to Figures 3 and 7B.
[000166] Upon receiving data related to DUS 722, data collector 350 can update DUS profile 776, when necessary, to include data related to the first operating state State1, using the techniques described above in the context of Figure 3. Figure 7C represents DUS profile 776 updated to store data for the first operating state State1 (shown in Figure 7C as “State1 Data”).
[000167] Data analyzer 370 can generate a differential analysis by comparing data from DUS 102 while operating in one or more “operating states”.
Figure 7C shows data analyzer 370 communicating state data request 786 to profile data 772 to request data related to the first operating state State1 of DUS 102. In response, profile data 772 retrieves the data related to the first operating state State1 from DUS profile 776 and communicates the retrieved data to data analyzer 370 via state data response 788.
[000168] Data analyzer 370 can compare data related to the first operating state State1 with aggregate data 796. In some embodiments, aggregate data 796 can be equivalent to aggregate data 672 discussed above in the context of Figures 6B and 6C. In particular embodiments not shown in Figure 7C, some or all of aggregate data 796 is not stored on client device 104; rather, queries for aggregated data are sent via communications interface 310 for remote processing.
[000169] Data analyzer 370 can query aggregate data 796 to determine aggregate data 798 for operating state State1 (shown in Figure 7C as “State 1 Agg Data”). Upon receiving the data related to the first operating state State1 and aggregate data 798, data analyzer 370 can generate differential analysis (DA) 790.
[000170] Figure 8A represents an example flowchart illustrating functions 800 to generate differential analysis 790. In block 810, the operating state value n is set to 1. In block 820, a grid cell n is determined for the data related to operating state n.
[000171] Figure 8B shows an example grid 870 with grid cell 872, which corresponds to the first operating state State1, and grid cell 874, which corresponds to the second operating state State2. Grid 870 is a two-dimensional grid with revolutions per minute (RPM) on the horizontal axis of grid 870 and load on the vertical axis of grid 870.
[000172] In some embodiments, load can be determined based on a vacuum reading (for example, manifold vacuum for a vehicle acting as DUS 102).
Other values for the horizontal axis and/or the vertical axis are also possible. As such, each grid cell includes a range of revolutions per minute and a load range. Data related to an operating state can be examined to determine revolutions per minute and load data. For a given operating state, the revolutions per minute data can be compared with the grid's ranges of revolutions per minute data to determine a grid column for the given operating state, and the load data can be compared with the grid's ranges of load data to determine a grid row for the given operating state. The grid cell for the given operating state is specified by the resulting grid row/grid column pair. Other techniques for determining a grid cell for data related to an operating state are also possible.
[000173] As shown in Figure 8B, grid cells 872 and 874 can indicate that operating state State1 is an “idle” or similar no-load/low-load state and operating state State2 is a “cruising” or similar normal-load operating state. Many other examples are also possible, including, but not limited to, grids with fewer or more grid cells and/or non-square grids.
[000174] By determining which grid cell n is related to operating state n, the data can be verified as related to (or unrelated to) a specific operating state. For example, suppose that data D1 is received as being related to an “idle” operating state and that G1 is the grid cell determined for D1. By determining that G1 is a grid cell related to the “idle” operating state, D1 can be verified as being taken from the specific “idle” operating state.
[000175] As another example, let D2 be data from a test requested for a “cruising” operating state, let G2 be the grid cell determined for D2 using the techniques mentioned above, and suppose that G2 does not relate to the “cruising” operating state (for example, G2 instead relates to the “idle” operating state).
Since D2 is not related to a grid cell for the specific “cruising” operating state, D2 is not verified as being from the specific “cruising” operating state.
[000176] If data is not verified as being from a given operating state, a request can be generated to re-run a test in the appropriate operating state. Continuing with the example above, since D2 has not been verified to be from the “cruising” operating state, another test may be requested to generate data from the “cruising” operating state.
[000177] In some cases, the data can be verified by techniques other than using the grid cell. For example, a vehicle may be known, possibly by means of direct observation and/or by means of data not used to assign the grid cells, to be operating in a certain operating state. For example, a driver of a vehicle operating in a “cruising” operating state could state: “I know that I was driving consistently between 48 and 56 km/h (30 and 35 MPH) throughout the test”. At that point, the data can be verified as being from the given “cruising” operating state. In that case, the erroneous data used to assign grid cells and operating states, which failed to indicate that the vehicle was in the “cruising” operating state, is itself indicative of a problem. Consequent repair strategies to correct the causes of the erroneous data can be used to address the problem.
[000178] Returning to Figure 8A, in block 830, aggregate data n is determined based on grid cell n. For example, data analyzer 370 can query aggregate data 796 to retrieve data related to DUS for data within grid cell n (that is, data taken within the revolutions per minute and load ranges for grid cell n). Alternatively, data analyzer 370 can query aggregate data 796 to retrieve DUS-related data and filter the retrieved data for data within grid cell n, thereby determining aggregate data n. Other techniques for determining aggregated data n are also possible.
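The grid-cell determination and operating-state verification described in paragraphs [000172] through [000176] can be sketched as follows. The range boundaries, function names, and dictionary keys are illustrative assumptions; the patent only requires that RPM and load measurements be mapped against the grid's ranges:

```python
import bisect

def grid_cell(rpm, load, rpm_edges, load_edges):
    """Map an (RPM, load) measurement to a (row, column) grid cell by
    comparing it against the grid's RPM and load range boundaries."""
    col = bisect.bisect_right(rpm_edges, rpm) - 1   # RPM (horizontal axis) -> column
    row = bisect.bisect_right(load_edges, load) - 1  # load (vertical axis) -> row
    return (row, col)

def verify_state(data, expected_cell, rpm_edges, load_edges):
    """Verify that data was taken in the expected operating state, i.e.
    that its measurements fall within that state's grid cell."""
    return grid_cell(data["rpm"], data["load"], rpm_edges, load_edges) == expected_cell
```

If `verify_state` returns `False` (as for D2 above), a new test request for the appropriate operating state could be generated.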
[000179] In block 840, a differential analysis list (DA List) n is generated based on the comparison of data related to operating state n and aggregated data n. The differential analysis list can be generated based on data related to operating state n and aggregate data n that differ. Example techniques for determining differences between a data value related to operating state n and an aggregate data value n include determining that: the data value is not the same as the aggregate data value; the data value is not within a range of data values for the aggregated data; the data value is either above or below a threshold value for the aggregated data value(s); the data value does not match one or more aggregated data values; each of a number of data values is not the same as, within a range of, and/or within a limit of an aggregate data value; computation(s) on the data value(s), possibly including reference values, differ(s) when compared with the reference values; and/or negations of these conditions.
[000180] A statistical analysis of the data related to operating state n and/or the aggregate data n can be used to generate differential analysis list n. For example, the statistical selection techniques discussed above in the context of Figure 3 can be applied to data related to operating state n and/or aggregate data n. Statistical selection may involve generating one or more statistics for aggregated data n and then comparing the data related to operating state n based on the generated statistics.
[000181] For example, suppose that data related to operating state n include a measurement value Mn obtained using a sensor Sens1. Suppose further that aggregated data for sensor Sens1 indicate a mean measurement value AggMeanMn with a standard deviation AggSDMn.
Then, a number of standard deviations NSDMn from the mean AggMeanMn for measurement Mn could be determined, possibly using the formula: NSDMn = |Mn − AggMeanMn| / AggSDMn.
[000182] Then, the measured value Mn could be evaluated based on the number of standard deviations NSDMn and one or more limit values. For example, suppose that the assessments in Table 3 below were used to assess the number of standard deviations NSDMn:
TABLE 3
  Lower Limit   Upper Limit   Assessment
  0             1.999         Acceptable
  2             3.999         Marginal
  4             --            Failure
[000183] If NSDMn is between a lower limit of 0 and an upper limit of 1.999, then the measurement Mn can be assessed as acceptable. Similarly, if NSDMn is between a lower limit of 2 and an upper limit of 3.999, then the measurement Mn can be assessed as marginal. If NSDMn is greater than 4, then the measurement Mn can be assessed as a failure.
[000184] The techniques described above for the example measurement Mn, and more advanced statistical analysis techniques, including variances, correlations, and/or principal component analyses, can be applied to multiple variables (for example, measurement Mn and other measurements Mn1, Mn2, ...) to perform a “multiple variable analysis” of the operating-state-related data against the aggregated data. Furthermore, relationships between two or more variables of the data related to operating state n and the aggregated data can be examined during the multiple variable analysis.
[000185] One or more variables of the data, or principal contributors, can be chosen that (a) are related to operating state n and/or the aggregated data and (b) separate the data related to operating state n and/or the aggregated data into different categories. Principal contributors can be determined through operations on the aggregate database using techniques to identify a small set of principal basis vectors and most-likely failure vectors. The techniques include, but are not limited to, singular value decomposition (SVD), correlations, and/or variances.
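The standard-deviation computation and Table 3 assessment above can be sketched as follows (the boundaries follow Table 3; the function name is an assumption, and the boundary cases between 1.999/2 and 3.999/4 are resolved here by simple `<` comparisons):

```python
def assess_measurement(mn, agg_mean, agg_sd):
    """Compute NSDMn = |Mn - AggMeanMn| / AggSDMn and assess it against
    the Table 3 limits: 0-1.999 acceptable, 2-3.999 marginal, >= 4 failure."""
    nsd = abs(mn - agg_mean) / agg_sd
    if nsd < 2:
        return nsd, "acceptable"
    if nsd < 4:
        return nsd, "marginal"
    return nsd, "failure"
```

A "marginal" or "failure" assessment for measurement Mn would cause it to be placed on differential analysis list n.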
The projection of these vectors into the space of actual vehicle parameters and variables yields the diagnostic strategies and prognoses for a vehicle.
[000186] As a simplified example, suppose that if both the Mn and Mn1 measurements are failures, then a fault of sensor Sens1 is implied, but Sens1 is not implied when Mn is failing and Mn1 is not. Taking this example to higher dimensions, consider a situation where there is a large number LN of monitored parameters (for example, LN = 30) for a vehicle. Patterns may be present within small existing data sets that imply a failure, and different patterns exhibited within these LN parameters will indicate different vehicle failures. Through a singular value decomposition analysis, and projections onto the real vector space, subsets of the parameters can be identified for consideration as principal basis vectors. The subsets of the parameters can be grouped and monitored for different faults. When a subset of parameters exhibits certain conditions, which can be monitored using rules in the rules engine, an appropriate repair strategy can be determined.
[000187] Multiple variable correlation analysis can be used to compare data related to operating state n and aggregated data n. For example, suppose that a vector VAD includes a number SN of aggregated data sensor values related to a particular complaint, including values for one or more principal components, and also suppose that the data related to operating state n include a vector VNAD of SN non-aggregated data sensor values from a device in service with the particular complaint, also including values for one or more principal components.
[000188] Then, a correlation analysis can be performed between the data in the VAD and VNAD vectors. For example, a “pattern correlation” or Pearson product-moment correlation coefficient can be calculated between VAD and VNAD.
Pearson's product-moment correlation coefficient ρ for the vectors VAD and VNAD can be determined as ρ = cov(VAD, VNAD) / (σ(VAD) σ(VNAD)), where −1 <= ρ <= +1, cov(X, Y) is the covariance of X and Y, and σ(X) is the standard deviation of X. ρ indicates the degree of linear dependency (correlation) between the two vectors, with ρ = +1 when a linear relationship between the vectors exists, ρ = −1 when the VAD and VNAD values are situated on a line such that VAD increases when VNAD decreases, and ρ = 0 when there is no linear correlation between VAD and VNAD.
[000189] Thus, when ρ is approximately or exactly equal to 1, there may be an approximately or exactly linear relationship between the aggregated data VAD and the corresponding input data VNAD from the vehicle under test. However, if ρ is less than 1 by more than a predetermined threshold amount (for example, for a predetermined threshold amount of 0.15, ρ <= 0.85), an inference can be made that there is probably no linear correlation between VAD and VNAD. Based on this inference, the data in VAD and VNAD can be considered unrelated. When VAD and VNAD are determined to be unrelated, since the data in VAD are aggregated or valid/baseline data, another inference can be drawn: that VNAD has invalid data. Thus, when ρ is less than 1 by more than the predetermined threshold amount, one or more repair strategies can be suggested to bring the VNAD data, including values of the principal components, closer to the VAD data.
[000190] Another simplified example of multiple variable analysis may involve the generation of an n-dimensional space from a data set, such as aggregated data, baseline data, and/or reference data, for one or more devices under test. Each dimension in the n-dimensional space can represent a value of interest for a device under test, such as, but not limited to, device-related data values, sensor values taken from the device under test, baseline and/or reference data values, and statistics based on those values.
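The Pearson correlation check of paragraphs [000188] and [000189] can be sketched as follows (function names are assumptions; the 0.15 threshold follows the example above):

```python
import math

def pearson(vad, vnad):
    """Pearson product-moment correlation coefficient between the
    aggregated sensor vector VAD and the non-aggregated vector VNAD:
    rho = cov(VAD, VNAD) / (sigma(VAD) * sigma(VNAD))."""
    n = len(vad)
    mean_a = sum(vad) / n
    mean_b = sum(vnad) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(vad, vnad)) / n
    sd_a = math.sqrt(sum((a - mean_a) ** 2 for a in vad) / n)
    sd_b = math.sqrt(sum((b - mean_b) ** 2 for b in vnad) / n)
    return cov / (sd_a * sd_b)

def likely_unrelated(vad, vnad, threshold=0.15):
    """Infer 'probably no linear correlation' when rho falls below 1
    by more than the predetermined threshold amount."""
    return pearson(vad, vnad) <= 1 - threshold
```

When `likely_unrelated` returns `True`, the sketch above would treat VNAD as containing invalid data and suggest repair strategies to bring it closer to VAD.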
[000191] A basis of n vectors for the n-dimensional space can be determined; that is, each basis vector Vbasis(i), n >= i >= 1, is linearly independent of the other n-1 Vbasis vectors in the basis. In some embodiments, the rules engine 340 and/or the rules engine 440 may use the n-dimensional space. For example, rules engine 340 and/or rules engine 440 can receive n-dimensional input vector(s) that correspond to one or more measurements taken from one or more tests and perform vector operations and/or other operations to compare the n-dimensional input vector with one or more n-dimensional baseline data vectors that share a basis with the n-dimensional input vector. In particular, test data can be mapped into the n-dimensional vector space as an n-dimensional vector, and the rules engine 340 and/or the rules engine 440 can process the n-dimensional baseline data vector(s) and/or the n-dimensional input vector(s). [000192] For example, in tire diagnostics, an n = 4 dimensional space for tires of a given manufacturer/model pair may include: tire pressure tp (in PSI), tire mileage tm (in miles), tire temperature tt (in degrees Fahrenheit), and tire age ta (in years). For this example, a basis set of Vbasis vectors for this space can be the standard basis: [1 0 0 0]T, [0 1 0 0]T, [0 0 1 0]T, and [0 0 0 1]T. Continuing with this example, a 4-dimensional vector using the sample basis set of vectors for a test result indicating that a 3-year-old tire has a pressure of 30 pounds/square inch is: [30 0 0 3]T. [000193] Dimensions in the vector space can be classified. For example, some dimensions can be classified as "static" dimensions, while others can be classified as "dynamic" dimensions. A static dimension is a dimension that cannot be easily changed during a DUS repair session, if at all. For example, the tire age dimension ta, when expressed in years, cannot be easily changed during a one-day repair session.
In contrast, dynamic dimensions can easily be changed during a repair session. For example, the tire pressure dimension tp can be changed by a technician with access to an air hose, to add air and thus increase tp, and a screwdriver to release air and thus decrease tp. As such, measurements related to static dimensions can be used to classify one or more components of a DUS, and measurements related to dynamic dimensions can be adjusted during maintenance and/or repair procedures. [000194] Once a set of values of interest and corresponding basis vectors has been determined for the n-dimensional vector space, baseline values can be determined in the n-dimensional space, and adjustments to the device under test that relate to the values of interest can be performed to align test-related data with the baseline values. Continuing with the 4-dimensional tire values, suppose that the baseline data for a 3-year-old tire at 70 degrees Fahrenheit is: [28 tm 70 3]T, where tm is in the range of 10000 * ta to 20000 * ta; that is, between 30,000 and 60,000 miles, and that an example input vector of test-related data is: [37 50000 70 3]T. Taking the difference between the baseline data vector and the input vector leads to an example difference vector of [-9 tm 0 0]T; thus, to bring the tire to the baseline data, the tire pressure must be reduced by 9 PSI, as can be seen in the value -9 for the tp entry in the example difference vector. The rules engine 340 and/or the rules engine 440 can then fire a rule to provide a strategy or sub-strategy for lowering tire pressure. For example, a strategy or sub-strategy could be that the tire pressure can be lowered by pressing a screwdriver on a pin in a valve stem, allowing air to escape, and thus lowering the air pressure.
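The tire-space vectors and the difference-vector computation described above could be sketched as follows; the names, the `DYNAMIC` index set, and the handling of the range-valued mileage entry are illustrative assumptions, not from the patent:

```python
# Dimensions of the example 4-d tire space, in order: (tp, tm, tt, ta).
DIMS = ("tp", "tm", "tt", "ta")
DYNAMIC = {0}  # index of tp: the only easily adjustable (dynamic) dimension here

# Baseline for a 3-year-old tire at 70 F; tm is range-valued in the baseline,
# so it is represented as None and excluded from the point-wise difference.
baseline = [28.0, None, 70.0, 3.0]
measured = [37.0, 50000.0, 70.0, 3.0]  # example input vector [37 50000 70 3]^T

# The baseline mileage range is 10000*ta to 20000*ta miles:
mileage_ok = 10000 * measured[3] <= measured[1] <= 20000 * measured[3]

def difference(baseline, measured):
    """Point-wise baseline-minus-measured difference, skipping range entries."""
    return [None if b is None else b - m for b, m in zip(baseline, measured)]

diff = difference(baseline, measured)  # [-9.0, None, 0.0, 0.0]

# Only dynamic dimensions can be adjusted during a repair session:
adjustments = {DIMS[i]: d for i, d in enumerate(diff)
               if i in DYNAMIC and d not in (None, 0.0)}
# adjustments == {"tp": -9.0}: lower tire pressure by 9 PSI
```

A rules engine could then key a sub-strategy (for example, lowering tire pressure) on the non-zero dynamic entries of `adjustments`.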
[000195] In some embodiments, rules engine 340 and/or rules engine 440 may determine that the tp dimension is a dynamic dimension, and thus that a sub-strategy can be used to adjust the tp value for the DUS. Based on this determination, the rules engine 340 and/or the rules engine 440 can identify the aforementioned sub-strategy for lowering tire pressure. [000196] Other thresholds, evaluations, evaluation schemes, analyses of single and multiple variables, vectors, bases, data differences, and/or related techniques are also possible. [000197] In block 850, a comparison is made between the operating state value n and a maximum number of operating states ("MaxOS" in Figure 8B). For the example shown in Figures 7A-7C, the maximum number of operating states is two. If the operating state value n is greater than or equal to the maximum number of operating states, functions 800 continue at block 860; otherwise, functions 800 continue at block 852. [000198] In block 852, the operating state value n is incremented by 1 and functions 800 continue at block 820. [000199] In block 860, differential analysis 790 is determined by combining the differential analysis lists n, where n varies from 1 to the maximum number of operating states. The differential analysis lists can be combined by concatenating all the lists, by taking a union of the differential analysis lists, by selecting some, but not all, of the data from each differential analysis list, by selecting some, but not all, of the differential analysis lists, and/or by filtering each list for common differences. As an example of filtering for common differences, suppose that n = 2, with differential analysis list 1 including differences for data from three sensors: S1, S3, and S6, and differential analysis list 2 including differences for data from four sensors: S2, S3, S6, and S8. Then, the common differences for both sample lists would be the data from the S3 and S6 sensors.
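The common-difference filtering in the S1/S3/S6 versus S2/S3/S6/S8 example above amounts to a set intersection across the per-operating-state lists; a minimal sketch (the function name is illustrative) could be:

```python
def common_differences(lists):
    """Filter per-operating-state differential analysis lists down to the
    sensors that show differences in every operating state."""
    sets = [set(lst) for lst in lists]
    common = sets[0]
    for s in sets[1:]:
        common &= s          # keep only sensors present in every list
    return sorted(common)

list1 = ["S1", "S3", "S6"]        # differences observed in operating state 1
list2 = ["S2", "S3", "S6", "S8"]  # differences observed in operating state 2
common_differences([list1, list2])  # -> ["S3", "S6"]
```

The other combination options mentioned above (concatenation, union, partial selection) would replace the intersection with the corresponding list or set operation.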
Other techniques for combining the differential analysis lists are also possible. [000200] In some scenarios, the maximum number of operating states can be equal to one. In these scenarios, the differential analysis would involve the comparison of block 840 between data related to operating state 1 and aggregated data for operating state 1, and the combination operation of block 860 could return the differential analysis list for operating state 1. As such, the differential analysis for only one operating state involves a comparison of data related to that operating state and data aggregated for that operating state. [000201] Thus, functions 800 can be used to generate a differential analysis by comparing data related to one or more operating states and aggregated data for those one or more operating states using a grid of operating states. [000202] Returning to Figure 7C, data analyzer 370 can communicate differential analysis 790 to rules engine 340. Upon receipt of differential analysis 790, rules engine 340 can communicate query 792, with previously determined complaint data (shown in Figure 7C as "Comp2") and differential analysis 790, to the diagnostic rules and strategy database 770 (shown in Figure 7C as "Diag Rules/Strat 770") using the techniques described above in the context of Figures 3 and 4. [000203] In response, diagnostic rules and strategy database 770 can communicate strategy 794 (shown in Figure 7C as "S2+'"), including one or more associated rules and sub-strategies, to the rules engine 340. Using the techniques described above in the context of Figures 3, 4, and 6C, the rules engine 340 can determine which rule(s) to fire and their associated sub-strategy/sub-strategies.
In scenario 780, the rules engine generates DUS report display 740, which can include some or all of differential analysis 790 and/or some or all of the sub-strategies of strategy 794 (collectively shown in Figure 7C as "S2"), and communicates DUS report display 740 to communication interface 310. The communication interface 310 can then send DUS report display 740. IV. EXAMPLE OPERATION [000204] Figure 9 represents an example flowchart that illustrates functions 900 that can be performed according to an example embodiment. For example, functions 900 can be performed by one or more devices, such as server device 106 and/or client device 104, described in detail above in the context of Figures 1-7C. [000205] Block 910 includes receiving device-in-service (DUS)-related data for a device in service. For example, the DUS-related data could be received in a DUS-related data communication, as described above in detail with respect to at least Figures 6A-7C. In some embodiments, the DUS-related data includes DUS test data obtained from a DUS test performed on the DUS. [000206] Block 920 includes determining that the DUS-related data should be aggregated into aggregated data. In some embodiments, the determination to aggregate the DUS-related data may be based on a classification of the DUS-related data. The determination to aggregate DUS-related data is described above in detail with respect to at least Figures 4, 6A, 6B, and 6C. [000207] In some embodiments, determining that the DUS-related data should be aggregated includes: determining one or more DUS attributes from the DUS-related data, selecting baseline data from the aggregated data based on the one or more DUS attributes, generating a baseline comparison between the DUS test data and the baseline data, determining the classification for the DUS-related data based on the baseline comparison, and aggregating the DUS-related data into the aggregated data based on the classification.
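The classify-then-aggregate decision of blocks 910-920 can be loosely sketched as below; the attribute key, the tolerance, and all names are invented for illustration, since the patent does not specify this logic in code:

```python
# Hypothetical baseline lookup keyed on DUS attributes (illustrative only):
baseline_by_attrs = {
    ("AcmeCar", "Roadster"): 32.0,  # baseline tire pressure in PSI
}
aggregated = []  # the growing pool of aggregated (valid/baseline) data

def classify(attrs, tire_pressure, tolerance=4.0):
    """Classify test data by comparing it against the selected baseline."""
    base = baseline_by_attrs[attrs]
    return "valid" if abs(tire_pressure - base) <= tolerance else "suspect"

def maybe_aggregate(attrs, tire_pressure):
    """Aggregate DUS-related data only when its classification permits."""
    cls = classify(attrs, tire_pressure)
    if cls == "valid":
        aggregated.append((attrs, tire_pressure))
    return cls

maybe_aggregate(("AcmeCar", "Roadster"), 31.0)  # "valid" -> aggregated
maybe_aggregate(("AcmeCar", "Roadster"), 20.0)  # "suspect" -> not aggregated
```

This mirrors the sequence in paragraph [000207]: select baseline data by DUS attributes, compare, classify, and aggregate based on the classification.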
[000208] Block 930 includes generating a comparison of aggregated data from the DUS-related data and the aggregated data. Comparisons of DUS-related data and aggregated data are described above in detail with respect to at least Figures 3, 4, and 6A-8B. [000209] In some embodiments, the generation of the aggregated data comparison of the DUS-related data and the aggregated data includes: (i) determining a basis of one or more vectors representing at least part of the aggregated data, (ii) determining a baseline data vector of the baseline data, the baseline data vector using the basis, (iii) determining a DUS data vector of the DUS-related data, the DUS data vector using the basis, and (iv) determining a vector difference between the baseline data vector and the DUS data vector. The generation of an aggregated data comparison using a vector basis, uses for bases, and uses for vector differences are discussed above in more detail at least in the context of Figure 8A. [000210] In some embodiments, the generation of the aggregated data comparison of the DUS-related data and the aggregated data includes generating a pattern correlation between at least some of the DUS-related data and at least some of the aggregated data. Pattern correlations are discussed above in more detail at least in the context of Figure 8A. [000211] Block 940 includes generating a DUS report based on the comparison of aggregated data. In some embodiments, the DUS report may include one or more sub-strategies; and in particular ones of these embodiments, at least one of the one or more sub-strategies may include a sub-strategy success estimate. DUS reports, sub-strategies, and sub-strategy success estimates are described above in detail with respect to at least Figures 3, 4, and 6A-8B. [000212] In other embodiments, the DUS-related data includes complaint data; in these embodiments, the generation of the DUS report includes generating the DUS report based on the complaint data.
In particular ones of these embodiments, generation of the DUS report includes: determining at least one complaint based on the complaint data, generating a query based on the at least one complaint, querying a device rules engine using the query, and, in response to the query, receiving one or more sub-strategies. In some of the particular embodiments, the complaint data includes complaint text; in these embodiments, determining the at least one complaint includes: generating a textual analysis of the complaint text and determining the at least one complaint based on the textual analysis. In other particular embodiments, the DUS-related data includes DUS test data obtained from a first test performed on the DUS; in these embodiments, generating the comparison of aggregated data includes performing a statistical analysis of the DUS test data and the aggregated data, and generating the DUS report includes generating the query based on the statistical analysis and the at least one complaint. In some of the other particular embodiments, the DUS-related data and the aggregated data each comprise data for a plurality of variables, and the performance of the statistical analysis comprises performing a multiple-variable analysis of the data for at least two variables of the plurality of variables. [000213] In still other embodiments, generating the DUS report based on the comparison of aggregated data may include determining at least one of the one or more sub-strategies based on a vector difference. The use of vector differences to determine sub-strategies is discussed above in more detail at least in the context of Figure 8A.
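The patent does not specify how the textual analysis of complaint text is performed; a minimal keyword-matching sketch (the keyword table, function names, and query shape are all hypothetical) might look like:

```python
# Hypothetical keyword table mapping complaint categories to trigger words.
COMPLAINT_KEYWORDS = {
    "tire": ["tire", "flat", "pressure"],
    "brake": ["brake", "squeal", "stopping"],
}

def determine_complaints(complaint_text):
    """Determine at least one complaint category from free-form complaint text."""
    text = complaint_text.lower()
    return sorted(category for category, keywords in COMPLAINT_KEYWORDS.items()
                  if any(kw in text for kw in keywords))

def build_query(complaints):
    """Build a rules-engine query keyed on the detected complaints."""
    return {"complaints": complaints}

q = build_query(determine_complaints("Front left tire keeps losing pressure"))
# q == {"complaints": ["tire"]}
```

A real implementation could replace the keyword match with any textual-analysis technique; the point is only the flow of complaint text to complaint(s) to query to sub-strategies.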
[000214] In other embodiments, the DUS-related data includes complaint data, and the generation of the aggregated data comparison of the DUS-related data and the aggregated data includes: (i) determining a reduced set of the aggregated data based on the complaint data, (ii) determining a set of basis vectors based on the reduced data set, and (iii) identifying one or more principal parameter components for a complaint in the complaint data based on a projection of the basis vectors onto the DUS-related data. In these embodiments, generating the DUS report based on the comparison of aggregated data includes: (iv) applying one or more rules to the principal parameter components, and (v) determining a sub-strategy based on the one or more applied rules. The use of principal parameter components is discussed above in more detail at least in the context of Figure 8A. [000215] Block 950 includes sending the DUS report. The sending of the DUS report is described in more detail with respect to at least Figures 3, 4, 6A, 6C, 7A, and 7C. [000216] In some embodiments, functions 900 may also include the generation, on the device, of a diagnostic request based on the comparison of aggregated data, the diagnostic request requesting data related to a second DUS test performed on the DUS. In particular ones of these embodiments, the diagnostic request includes instructions for performing the second DUS test. [000217] In still other embodiments, functions 900 can also include receiving, on the device, data related to success of a first sub-strategy of the one or more sub-strategies, and adjusting the sub-strategy success estimate of at least the first sub-strategy based on the data related to success. [000218] Figure 10 represents an example flowchart that illustrates functions 1000 that can be performed according to an example embodiment.
For example, functions 1000 can be performed by one or more devices, such as server device 106 and/or client device 104, described in detail above in the context of Figures 1-7C. [000219] Block 1010 includes receiving a diagnostic request for a DUS. Diagnostic requests for devices in service are described above in detail with respect to at least Figures 3-7C. [000220] Block 1020 includes sending a DUS test request to perform a test related to the diagnostic request. DUS test requests and DUS tests are described above in detail with respect to at least Figures 3, 4, and 6A-7C. [000221] Block 1030 includes receiving DUS-related data based on the test. Receiving DUS-related data is described above in detail with respect to at least Figures 3, 4, and 6A-7C. [000222] Block 1040 includes sending the DUS-related data. Sending DUS-related data is described above in detail with respect to at least Figures 3, 4, and 6A-7C. In some embodiments, the DUS-related data is sent through a communication network interface. [000223] Block 1050 includes receiving a DUS report based on the DUS-related data. DUS reports are described above in detail with respect to at least Figures 3, 4, and 6A-7C. In some embodiments, the DUS report is received through a communication network interface. [000224] Block 1060 includes generating a DUS report display of the DUS report. Generating the DUS report display is described in more detail with respect to at least Figures 3, 6A, 7A, and 7C. In some embodiments, the DUS report display is displayed through a user interface. [000225] Figure 11 represents an example flowchart that illustrates functions 1100 that can be performed according to an example embodiment. For example, functions 1100 can be performed by one or more devices, such as server device 106 and/or client device 104, described in detail above in the context of Figures 1-7C. [000226] Block 1110 includes receiving a diagnostic request to diagnose a DUS.
Diagnostic requests for devices in service are described above in detail with respect to at least Figures 3-7C. [000227] Block 1120 includes determining a test based on the diagnostic request. The test can be related to a first operating state of the DUS. Operating states of devices in service and tests related to those operating states are discussed above in detail with respect to at least Figures 3, 4, and 7A-7C. In some embodiments, the test includes a plurality of tests for the DUS. [000228] Block 1130 includes requesting performance of the test in the first operating state of the DUS. Operating states of devices in service and testing in those operating states are discussed above in detail with respect to at least Figures 3, 4, and 7A-7C. [000229] Block 1140 includes receiving data of the first operating state for the DUS based on the test. Operating states of devices in service and data from tests in those operating states are discussed above in detail with respect to at least Figures 3, 4, and 7A-7C. In some embodiments, the data for the first operating state includes data from at least two sensors associated with the DUS. [000230] Block 1150 includes verifying whether or not the data of the first operating state is related to the first operating state. In some embodiments, verifying that the data of the first operating state is related to the first operating state includes: determining a first grid cell for the data of the first operating state, determining an operating state related to the first grid cell, and determining that the operating state related to the first grid cell is the first operating state.
In some of these embodiments, verifying that the data of the first operating state is not related to the first operating state includes: determining a first grid cell for the data of the first operating state, determining an operating state related to the first grid cell, and determining that the operating state related to the first grid cell is not the first operating state. Verifying whether data is or is not related to an operating state is discussed above in more detail with respect to at least Figure 8. [000231] In block 1160, a decision is made as to whether the data of the first operating state is related to the first operating state. If the data of the first operating state is related to the first operating state, control passes to block 1170. However, if the data of the first operating state is not related to the first operating state, control passes to block 1130. [000232] Block 1170 includes generating a differential analysis of the data of the first operating state. Differential analysis of data from devices in service is discussed above in detail with respect to at least Figures 3, 4, and 7A-8B. In some embodiments, generating the differential analysis includes: determining first aggregated data for a first grid cell, and generating a first differential analysis list for the first operating state based on a comparison of the data of the first operating state and the first aggregated data. [000233] Block 1180 includes generating a DUS report display. The DUS report display can be based on the differential analysis. Generating DUS report displays is discussed above in detail with respect to at least Figures 3, 4, and 6A-7C. [000234] Block 1190 includes sending the DUS report display. Sending DUS report displays is discussed above in detail with respect to at least Figures 3, 4, and 6A-7C. V. CONCLUSION [000235] Example embodiments of the present invention have been described above.
Those skilled in the art will understand that changes and modifications can be made to the described embodiments without departing from the true scope and spirit of the present invention, which is defined by the claims.
Claims (14) [0001] 1. Method, characterized by the fact that it comprises: on a device, receiving device-in-service (DUS)-related data for a DUS based on a first test performed on the DUS, the DUS-related data comprising complaint data related to a complaint that is due to a first component of the DUS, and test data for at least the first test; on the device, determining one or more classifications for the test data based on the complaint data, wherein the one or more classifications comprise complaint-oriented classifications associated with DUS components, and wherein determining the one or more classifications comprises: classifying the test data with a first complaint-oriented classification associated with the first DUS component; determining whether data transmitted as part of the test data is unrelated to the complaint; and after determining that the data transmitted as part of the test data is unrelated to the complaint, reclassifying the data transmitted as part of the test data with a second complaint-oriented classification that is associated with a second component of the DUS, wherein the second complaint-oriented classification differs from the first complaint-oriented classification; on the device, determining that the DUS-related data should be aggregated into aggregated data based on the one or more classifications for the test data; on the device, generating a first comparison of aggregated data from the DUS-related data and the aggregated data; on the device, generating a diagnostic request based on the comparison of aggregated data, the diagnostic request requesting data related to a second DUS test performed on the DUS; on the device, receiving DUS-related data based on the second DUS test; on the device, generating a DUS report based on a second comparison of aggregated data from the DUS-related data based on the second DUS test and the aggregated data, the DUS report comprising one or more sub-strategies, wherein at least one of the one or more sub-strategies comprises an estimate of sub-strategy
success; and sending the DUS report from the device. [0002] 2. Method, according to claim 1, characterized by the fact that determining that the DUS-related data should be aggregated comprises: determining one or more DUS attributes from the DUS-related data; selecting baseline data from the aggregated data based on the one or more DUS attributes; generating a baseline comparison between the DUS test data and the baseline data; determining the classification for the DUS-related data based on the baseline comparison; and aggregating the DUS-related data into the aggregated data based on the classification. [0003] 3. Method, according to claim 1, characterized by the fact that the diagnostic request comprises instructions for performing the second DUS test. [0004] 4. Method, according to claim 1, characterized by the fact that generating the DUS report comprises generating the DUS report based on the complaint data. [0005] 5. Method, according to claim 4, characterized by the fact that generating the DUS report comprises: determining the complaint based on the complaint data; generating a query based on the complaint; querying a device rules engine using the query; and, in response to the query, receiving the one or more sub-strategies. [0006] 6. Method, according to claim 5, characterized by the fact that the complaint data comprises complaint text, and wherein determining the complaint comprises: generating a textual analysis of the complaint text; and determining at least one complaint based on the textual analysis. [0007] 7. Method, according to claim 5, characterized by the fact that the DUS-related data comprises DUS test data obtained from the first test performed on the DUS, wherein generating the first comparison of aggregated data comprises performing a statistical analysis of the DUS test data and the aggregated data, and wherein generating the DUS report comprises generating the query based on the statistical analysis and the at least one complaint. [0008] 8.
Method, according to claim 7, characterized by the fact that the DUS-related data and the aggregated data each comprise data for a plurality of variables, and wherein the performance of the statistical analysis comprises performing a multiple-variable analysis of the data for at least two variables of the plurality of variables. [0009] 9. Method, according to claim 1, characterized by the fact that it also comprises: on the device, receiving data related to success of a first sub-strategy of the one or more sub-strategies; and, on the device, adjusting the sub-strategy success estimate of at least the first sub-strategy based on the data related to success. [0010] 10. Method, according to claim 1, characterized by the fact that generating the comparison of aggregated data of the DUS-related data and the aggregated data comprises: determining a basis of one or more vectors representing at least part of the aggregated data; determining a baseline data vector of the baseline data, the baseline data vector using the basis; determining a DUS data vector of the DUS-related data, the DUS data vector using the basis; and determining a vector difference between the baseline data vector and the DUS data vector; and optionally, wherein generating the DUS report based on the comparison of aggregated data comprises determining at least one of the one or more sub-strategies based on the vector difference. [0011] 11.
Method, according to claim 1, characterized by the fact that the DUS-related data comprises complaint data, and wherein generating the comparison of aggregated data of the DUS-related data and the aggregated data comprises: determining a reduced set of the aggregated data based on the complaint data; determining a set of basis vectors based on the reduced data set; and identifying one or more principal parameter components for a complaint in the complaint data based on a projection of the basis vectors onto the DUS-related data; and wherein generating the DUS report based on the comparison of aggregated data comprises: applying one or more rules to the principal parameter components; and determining a sub-strategy based on the one or more applied rules. [0012] 12. Method, according to claim 1, characterized by the fact that generating the comparison of aggregated data of the DUS-related data and the aggregated data comprises generating a pattern correlation between at least some of the DUS-related data and at least some of the aggregated data. [0013] 13. Client device, characterized by the fact that it comprises: memory; a processor; and instructions stored in the memory which, in response to execution by the processor, cause the client device to perform the method as defined in any of claims 1 to 12. [0014] 14. Client device, according to claim 13, characterized by the fact that the client device further comprises a communication network interface, wherein sending DUS-related data comprises sending the DUS-related data through the communication network interface.
message| RU2626168C2|2015-12-30|2017-07-21|Общество с ограниченной ответственностью "ТМХ-Сервис"|Method for technical diagnostics of locomotive equipment and device for its implementation| CN105740086B|2016-01-20|2019-01-08|北京京东尚科信息技术有限公司|A kind of method and device of intelligent fault diagnosis maintenance| US10462256B2|2016-02-10|2019-10-29|Curtail, Inc.|Comparison of behavioral populations for security and compliance monitoring| US10706645B1|2016-03-09|2020-07-07|Drew Technologies, Inc.|Remote diagnostic system and method| US10142391B1|2016-03-25|2018-11-27|Quest Software Inc.|Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization| CN107450507B|2016-05-31|2021-03-09|优信拍(北京)信息科技有限公司|Information processing intermediate system and method| US10049512B2|2016-06-20|2018-08-14|Ford Global Technologies, Llc|Vehicle puddle lights for onboard diagnostics projection| US20180017608A1|2016-07-12|2018-01-18|Ford Motor Company Of Canada, Limited|Electrical in-system process control tester| US10692035B2|2016-07-26|2020-06-23|Mitchell Repair Information Company, Llc|Methods and systems for tracking labor efficiency| US11222379B2|2016-12-15|2022-01-11|Snap-On Incorporated|Methods and systems for automatically generating repair orders| CN106758945A|2017-01-12|2017-05-31|中山市易达号信息技术有限公司|A kind of berth lock intelligent diagnosing method| US10628496B2|2017-03-27|2020-04-21|Dell Products, L.P.|Validating and correlating content| WO2019169147A1|2018-03-02|2019-09-06|General Electric Company|System and method for maintenance of a fleet of machines|
Legal status:
2018-12-18 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2019-11-12 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-01-12 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-02-09 | B16A | Patent or certificate of addition of invention granted | Free format text: Term of validity: 20 (twenty) years counted from 2012-02-20, subject to the applicable legal conditions.
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US13/031,565 | US20120215491A1 | 2011-02-21 | 2011-02-21 | Diagnostic Baselining
PCT/US2012/025802 | WO2012115899A2 | 2011-02-21 | 2012-02-20 | Diagnostic Baselining